google / paxml
Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced, fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization rates.
☆542, updated last week
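Model FLOP utilization (MFU), the metric cited above, is the ratio of achieved training FLOP/s to the hardware's peak FLOP/s. A common back-of-envelope estimate uses roughly 6 FLOPs per parameter per training token for a dense transformer's forward + backward pass. The helper below is an illustrative sketch of that arithmetic, not part of Pax's API; all names and the example hardware numbers are hypothetical.

```python
def model_flop_utilization(n_params, tokens_per_step, step_time_s, peak_flops):
    """Estimate MFU: achieved training FLOP/s divided by peak hardware FLOP/s.

    Uses the common ~6 * params * tokens approximation for one
    forward+backward pass of a dense transformer (attention FLOPs ignored).
    """
    achieved_flops_per_s = 6.0 * n_params * tokens_per_step / step_time_s
    return achieved_flops_per_s / peak_flops

# Example (hypothetical numbers): a 1B-parameter model, ~0.5M tokens per
# step, 4 s per step, on 8 accelerators with a 312 TFLOP/s peak each.
print(model_flop_utilization(1e9, 524_288, 4.0, 8 * 312e12))  # ~0.315
```

Halving the step time at fixed batch size doubles the estimated MFU, which is why the frameworks in the list below compete on step-time optimizations such as sharding, fused kernels, and overlap of compute with communication.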
Alternatives and similar repositories for paxml
Users interested in paxml are comparing it to the libraries listed below.
- jax-triton contains integrations between JAX and OpenAI Triton (☆436, updated last week)
- JAX-Toolbox (☆368, updated this week)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX (☆687, updated 3 weeks ago)
- Orbax provides common checkpointing and persistence utilities for JAX users (☆469, updated this week)
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future; PRs welcome) (☆393, updated 6 months ago)
- Library for reading and processing ML training data (☆626, updated this week)
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) (☆456, updated 2 weeks ago)
- TorchX is a universal job launcher for PyTorch applications, designed for fast iteration time in training/research and sup… (☆406, updated last week)
- seqax = sequence modeling + JAX (☆169, updated 4 months ago)
- JAX Synergistic Memory Inspector (☆183, updated last year)
- Implementation of Flash Attention in JAX (☆222, updated last year)
- JAX implementation of the Llama 2 model (☆216, updated last year)
- Inference code for LLaMA models in JAX (☆120, updated last year)
- CLU lets you write beautiful training loops in JAX (☆360, updated 5 months ago)
- A library to analyze PyTorch traces (☆448, updated last month)
- Train very large language models in JAX (☆210, updated 2 years ago)
- Minimal yet performant LLM examples in pure JAX (☆214, updated 2 weeks ago)
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements (☆404, updated last week)
- Pipeline Parallelism for PyTorch (☆783, updated last year)
- For optimization algorithm research and development (☆552, updated last week)