google / paxml
Pax is a JAX-based machine learning framework for training large-scale models. It supports advanced, fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOPs utilization (MFU) rates.
☆485 · Updated last week
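Pax builds on JAX's SPMD parallelism primitives. As a rough illustration of the kind of data-parallel training step this stack enables, here is a minimal sketch using plain `jax.pmap` (not Pax's actual API; the linear model and learning rate are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Simple linear model with mean-squared-error loss (illustrative only).
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.pmap  # replicate the step across all local devices (data parallelism)
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain SGD update with a fixed learning rate of 0.1.
    return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

n_dev = jax.local_device_count()
init = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
# Replicate parameters on every device and shard the batch along axis 0.
params = jax.device_put_replicated(init, jax.local_devices())
x = jnp.ones((n_dev, 8, 4))
y = jnp.ones((n_dev, 8, 1))
params = train_step(params, x, y)
```

Frameworks like Pax layer configuration systems and more general sharding strategies on top of primitives like this, so the same model code can scale from one device to large TPU pods.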
Alternatives and similar repositories for paxml:
Users interested in paxml are comparing it to the libraries listed below.
- jax-triton contains integrations between JAX and OpenAI Triton ☆388 · Updated this week
- Orbax provides common checkpointing and persistence utilities for JAX users ☆355 · Updated this week
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future; PRs welcome) ☆301 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆562 · Updated this week
- JAX-Toolbox ☆297 · Updated this week
- Library for reading and processing ML training data. ☆414 · Updated this week
- seqax = sequence modeling + JAX ☆151 · Updated 2 weeks ago
- Inference code for LLaMA models in JAX ☆116 · Updated 10 months ago
- CLU lets you write beautiful training loops in JAX. ☆335 · Updated 3 weeks ago
- Train very large language models in Jax. ☆203 · Updated last year
- Implementation of Flash Attention in Jax ☆207 · Updated last year
- JMP is a Mixed Precision library for JAX. ☆193 · Updated 2 months ago
- JAX Synergistic Memory Inspector ☆171 · Updated 8 months ago
- A simple library for scaling up JAX programs ☆134 · Updated 5 months ago
- This repository contains the experimental PyTorch-native float8 training UX ☆222 · Updated 8 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆524 · Updated last month
- For optimization algorithm research and development. ☆502 · Updated this week
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Pipeline Parallelism for PyTorch ☆761 · Updated 7 months ago
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 2 years ago