google / paxml
Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced, fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization (MFU) rates.
☆ 489 · Updated last week
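The parallelization that frameworks like Pax configure is built on JAX's SPMD primitives. As a minimal sketch of the underlying idea (plain JAX only, not Pax's own API; `loss_fn`, the parameter shape, and the batch shape are hypothetical), data parallelism replicates a gradient computation across local devices with `jax.pmap`:

```python
import jax
import jax.numpy as jnp

# Hypothetical toy loss for illustration; Pax defines models/losses
# through its own configurable experiment classes instead.
def loss_fn(params, x):
    return jnp.mean((x @ params) ** 2)

n = jax.local_device_count()
params = jnp.ones((4,))           # replicated across devices (in_axes=None)
batch = jnp.ones((n, 8, 4))       # leading axis is the device axis

# pmap runs jax.grad(loss_fn) once per device on its shard of the batch;
# this is the simplest of the parallelism strategies such frameworks expose.
grads = jax.pmap(jax.grad(loss_fn), in_axes=(None, 0))(params, batch)
print(grads.shape)  # (n, 4): one per-device gradient per local device
```

In practice, a full data-parallel step would also average `grads` across devices (e.g. with `jax.lax.pmean` inside the pmapped function) before applying the update.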
Alternatives and similar repositories for paxml:
Users interested in paxml are comparing it to the libraries listed below.
- ☆ 186 · Updated last week
- Orbax provides common checkpointing and persistence utilities for JAX users ☆ 370 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆ 390 · Updated 2 weeks ago
- ☆ 138 · Updated last week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆ 568 · Updated this week
- ☆ 295 · Updated last week
- JAX-Toolbox ☆ 299 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆ 317 · Updated this week
- ☆ 349 · Updated last year
- seqax = sequence modeling + JAX ☆ 154 · Updated 2 weeks ago
- ☆ 424 · Updated 9 months ago
- Library for reading and processing ML training data. ☆ 432 · Updated this week
- ☆ 216 · Updated 9 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆ 534 · Updated this week
- Inference code for LLaMA models in JAX ☆ 118 · Updated 11 months ago
- CLU lets you write beautiful training loops in JAX. ☆ 337 · Updated 2 weeks ago
- JMP is a Mixed Precision library for JAX. ☆ 194 · Updated 2 months ago
- Implementation of Flash Attention in JAX ☆ 206 · Updated last year
- JAX Synergistic Memory Inspector ☆ 172 · Updated 9 months ago
- JAX implementation of the Llama 2 model ☆ 218 · Updated last year
- ☆ 347 · Updated this week
- Train very large language models in JAX. ☆ 204 · Updated last year
- For optimization algorithm research and development. ☆ 508 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆ 240 · Updated last week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆ 359 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆ 223 · Updated 8 months ago
- ☆ 201 · Updated this week
- PyTorch per-step fault tolerance (actively under development) ☆ 284 · Updated this week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆ 375 · Updated last week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆ 246 · Updated this week