fattorib / ZeRO-transformer
Two implementations of ZeRO-1 optimizer sharding in JAX
☆14 · Updated 2 years ago
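To illustrate what ZeRO-1 optimizer sharding means in practice, here is a minimal, hypothetical JAX sketch (not taken from this repository): parameters stay replicated for the forward/backward pass, but each device keeps and updates optimizer state only for its own parameter shard, then all-gathers the updated shards. Names such as `step` and the SGD-with-momentum update are illustrative assumptions.

```python
import functools
import jax
import jax.numpy as jnp

LR, BETA = 1e-2, 0.9  # illustrative hyperparameters

@functools.partial(jax.pmap, axis_name="dev")
def step(param_shard, mom_shard, grad_shard):
    # ZeRO-1 idea: each device holds optimizer state (here, momentum)
    # only for its own 1/N slice of the parameters.
    mom_shard = BETA * mom_shard + grad_shard
    param_shard = param_shard - LR * mom_shard
    # All-gather the updated shards so every device sees the full,
    # replicated parameters again for the next forward/backward pass.
    full_params = jax.lax.all_gather(param_shard, axis_name="dev").reshape(-1)
    return full_params, param_shard, mom_shard

n_dev = jax.local_device_count()
params = jnp.zeros(8 * n_dev)      # flat parameter vector
grads = jnp.ones_like(params)      # stand-in gradients from a backward pass

# Split parameters, gradients, and momentum into one shard per device.
param_shards = params.reshape(n_dev, -1)
grad_shards = grads.reshape(n_dev, -1)
mom_shards = jnp.zeros_like(param_shards)

full_params, param_shards, mom_shards = step(param_shards, mom_shards, grad_shards)
```

Compared with fully replicated optimization, only the per-shard momentum (and, for Adam, the second moment) lives on each device, which is where the memory saving comes from.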
Alternatives and similar repositories for ZeRO-transformer
Users interested in ZeRO-transformer are comparing it to the libraries listed below.
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 9 months ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated last week
- ☆79 · Updated last year
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- seqax = sequence modeling + JAX ☆165 · Updated last month
- A library for unit scaling in PyTorch ☆125 · Updated 7 months ago
- JAX bindings for Flash Attention v2 ☆90 · Updated 11 months ago
- A simple library for scaling up JAX programs ☆139 · Updated 8 months ago
- Experimenting with how best to do multi-host dataloading ☆10 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- ☆186 · Updated last month
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- ☆132 · Updated last week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆64 · Updated 3 months ago
- ☆61 · Updated 3 years ago
- Accelerated First Order Parallel Associative Scan ☆182 · Updated 10 months ago
- ☆112 · Updated last year
- supporting pytorch FSDP for optimizers ☆82 · Updated 7 months ago
- Custom triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 8 months ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- ring-attention experiments ☆144 · Updated 8 months ago
- Train very large language models in Jax. ☆204 · Updated last year
- Implementation of Flash Attention in Jax ☆213 · Updated last year
- If it quacks like a tensor... ☆58 · Updated 8 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated 11 months ago
- A bunch of kernels that might make stuff slower 😉 ☆54 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆147 · Updated 2 weeks ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆85 · Updated last year