NVIDIA / jaxpp
JaxPP is a library for JAX that enables flexible MPMD (multiple-program, multiple-data) pipeline parallelism for large-scale LLM training.
☆45 · Updated 2 weeks ago
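Since MPMD pipeline parallelism is the core idea behind jaxpp, a minimal sketch may help situate the list below. This is not jaxpp's API: the stage functions, parameter names, and the sequential microbatch schedule are illustrative assumptions, and a real MPMD runtime would place stages on different devices and overlap their execution.

```python
# A minimal, single-host sketch of pipeline-style microbatching in plain JAX.
# NOTE: this is NOT jaxpp's API; stage1/stage2 and the schedule are
# hypothetical. A real MPMD pipeline runs stages on separate devices and
# overlaps stage2(microbatch i) with stage1(microbatch i + 1).
import jax
import jax.numpy as jnp

NUM_MICROBATCHES = 4  # assumed constant; a real runtime derives its schedule

def stage1(params, x):
    return jnp.tanh(x @ params["w1"])  # first pipeline stage

def stage2(params, x):
    return x @ params["w2"]            # second pipeline stage

@jax.jit
def pipelined_forward(params, batch):
    # Split the global batch into microbatches along the leading axis.
    micro = batch.reshape(NUM_MICROBATCHES, -1, batch.shape[-1])
    # Sequential schedule: stream each microbatch through both stages.
    outs = [stage2(params, stage1(params, mb)) for mb in micro]
    return jnp.concatenate(outs, axis=0)

key = jax.random.PRNGKey(0)
params = {"w1": jax.random.normal(key, (16, 32)),
          "w2": jax.random.normal(key, (32, 8))}
x = jax.random.normal(key, (8, 16))
print(pipelined_forward(params, x).shape)  # (8, 8)
```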
Alternatives and similar repositories for jaxpp
Users interested in jaxpp are comparing it to the libraries listed below.
- An extensible collectives library in Triton. ☆87 · Updated 2 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate (see the Triton sketch after this list). ☆153 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆88 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆127 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆48 · Updated this week
- DeeperGEMM: a heavily optimized version of DeepGEMM. ☆69 · Updated last month
- A framework that reduces autotuning overhead to zero for well-known deployments. ☆74 · Updated 3 weeks ago
- An experiment in using Tangent to autodiff Triton kernels. ☆79 · Updated last year
- High-speed GEMV kernels, with up to 2.7× speedup over the PyTorch baseline. ☆109 · Updated 10 months ago
- Applied AI experiments and examples for PyTorch. ☆274 · Updated last week
- This repository contains the experimental PyTorch-native float8 training UX. ☆223 · Updated 10 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code. ☆41 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆93 · Updated last week
- PyTorch Single Controller. ☆16 · Updated this week
- FlexAttention with FlashAttention-3 support. ☆26 · Updated 8 months ago
- Make Triton easier. ☆47 · Updated 11 months ago
- Flash-Muon: an efficient implementation of the Muon optimizer. ☆121 · Updated last week
- Ring-attention experiments. ☆145 · Updated 7 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- A collection of kernels written in the Triton language. ☆127 · Updated 2 months ago
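For readers unfamiliar with the Python-embedded-DSL style that several of the kernel libraries above build on, here is a minimal Triton example: a plain vector add, written from the standard Triton tutorial pattern rather than taken from any repository in this list.

```python
# A minimal Triton kernel sketch (illustrative; not from any repo listed above).
# Shows the Python-embedded-DSL style: the @triton.jit body is compiled to a
# GPU kernel, and each program instance handles one BLOCK-sized tile.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)                  # which tile this instance owns
    offsets = pid * BLOCK + tl.arange(0, BLOCK)  # element indices in the tile
    mask = offsets < n_elements                  # guard the ragged last tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)               # one program per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out

# Usage (requires a CUDA device):
#   a = torch.randn(4096, device="cuda")
#   b = torch.randn(4096, device="cuda")
#   assert torch.allclose(add(a, b), a + b)
```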