google / paxml
Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization rates.
☆543 · Updated 3 weeks ago
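As background for the listings below: Pax layers its configuration and parallelism machinery on top of core JAX transformations. A minimal sketch of the `jax.jit` + `jax.grad` training-step pattern such frameworks build on (illustrative plain JAX, not Pax's actual API):

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Simple linear model: predictions = x @ w + b
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=0.1):
    # Differentiate the loss w.r.t. the parameter pytree, then apply SGD.
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
x = jnp.ones((8, 3))
y = jnp.ones((8,))
for _ in range(100):
    params = train_step(params, x, y)
# The fitted loss approaches zero on this toy data.
```

Frameworks like Pax wrap this same jit/grad core with sharding annotations and experiment configuration so the step function scales across accelerators.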
Alternatives and similar repositories for paxml
Users interested in paxml are comparing it to the libraries listed below.
- jax-triton contains integrations between JAX and OpenAI Triton ☆436 · Updated 3 weeks ago
- JAX-Toolbox ☆373 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆688 · Updated 2 weeks ago
- Orbax provides common checkpointing and persistence utilities for JAX users ☆474 · Updated this week
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆398 · Updated 7 months ago
- Library for reading and processing ML training data. ☆643 · Updated last week
- seqax = sequence modeling + JAX ☆169 · Updated 5 months ago
- JAX Synergistic Memory Inspector ☆183 · Updated last year
- Train very large language models in Jax. ☆210 · Updated 2 years ago
- Implementation of Flash Attention in Jax ☆223 · Updated last year
- CLU lets you write beautiful training loops in JAX. ☆363 · Updated 2 weeks ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆411 · Updated this week
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆465 · Updated 2 weeks ago
- Minimal yet performant LLM examples in pure JAX ☆225 · Updated last week
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- JMP is a Mixed Precision library for JAX. ☆210 · Updated 11 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆158 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated 3 weeks ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆328 · Updated last week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆406 · Updated this week
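Several entries above (JMP among them) concern mixed-precision training in JAX. A hand-rolled sketch of the common pattern -- compute-heavy ops in bfloat16, parameters and gradients kept in float32 -- written in plain JAX and not using JMP's actual API:

```python
import jax
import jax.numpy as jnp

def forward(params_f32, x):
    # Cast parameters and inputs to bfloat16 for the matmul, then cast the
    # result back to float32 so the loss is accumulated at full precision.
    w = params_f32.astype(jnp.bfloat16)
    xb = x.astype(jnp.bfloat16)
    logits = (xb @ w).astype(jnp.float32)
    return jnp.mean(logits ** 2)

params = jnp.ones((4, 4), dtype=jnp.float32)
x = jnp.ones((2, 4), dtype=jnp.float32)
# Differentiating through the casts yields gradients in the parameter
# dtype (float32), ready for a full-precision optimizer update.
grads = jax.grad(forward)(params, x)
print(grads.dtype)  # float32
```

Keeping float32 master weights while computing in bfloat16 is what lets accelerator matrix units run fast without the parameters drifting from accumulated rounding error; libraries like JMP package policies for exactly this kind of per-op dtype assignment.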