cloneofsimo / min-fsdp
☆73 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for min-fsdp
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training (an FSDP wrapping sketch appears after this list) ☆113 · Updated 7 months ago
- Experiment using Tangent to autodiff Triton ☆72 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated last week
- WIP ☆89 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- Scalable neural net training via automatic normalization in the modular norm. ☆121 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆58 · Updated 4 months ago
- A library for unit scaling in PyTorch ☆105 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆66 · Updated 5 months ago
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam (see the spectral-scaling sketch after this list) ☆69 · Updated 3 months ago
- seqax = sequence modeling + JAX ☆133 · Updated 4 months ago
- Minimal but scalable implementation of large language models in JAX ☆26 · Updated 2 weeks ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆61 · Updated 7 months ago
- A simple library for scaling up JAX programs ☆127 · Updated 2 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- Normalized Transformer (nGPT) ☆66 · Updated this week
- A set of Python scripts that make your experience on TPU better ☆40 · Updated 4 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆44 · Updated 2 weeks ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆71 · Updated last month
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆87 · Updated 3 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (see the chunked sketch after this list) ☆54 · Updated 3 months ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆80 · Updated 11 months ago
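
For context on the FSDP-style training that min-fsdp and the first item above center on, here is a minimal sketch of wrapping a model in PyTorch's built-in `FullyShardedDataParallel`. The model, shapes, and hyperparameters are placeholders, not code from any listed repo.

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # Assumes a `torchrun --nproc_per_node=<gpus> train.py` launch,
    # which sets RANK / WORLD_SIZE / LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    model = torch.nn.Transformer(d_model=512).cuda()  # stand-in for a GPT
    model = FSDP(model)  # shards parameters, gradients, and optimizer state

    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)
    src = torch.randn(10, 4, 512, device="cuda")
    tgt = torch.randn(10, 4, 512, device="cuda")
    loss = model(src, tgt).pow(2).mean()  # dummy objective
    loss.backward()  # gradients are reduce-scattered across ranks
    optim.step()

if __name__ == "__main__":
    main()
```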
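The muP item above notes it implements the Spectral Condition for Feature Learning with SGD only. As a rough, hypothetical illustration of that scaling rule (not the listed repo's code), each linear layer's init std can target a spectral norm on the order of sqrt(fan_out/fan_in), with the per-layer SGD learning rate scaled by fan_out/fan_in:

```python
import torch
import torch.nn as nn

def spectral_sgd(model: nn.Module, base_lr: float = 0.1) -> torch.optim.SGD:
    """Hypothetical sketch of spectral-condition scaling for muP.
    Init std targets spectral norm ~ sqrt(fan_out/fan_in); per-layer SGD
    lr is scaled by fan_out/fan_in. As the item says: SGD only, not Adam."""
    groups = []
    for mod in model.modules():
        if isinstance(mod, nn.Linear):
            fan_in, fan_out = mod.in_features, mod.out_features
            # Gaussian init: spectral norm ~ std * (sqrt(fan_in) + sqrt(fan_out))
            std = (fan_out / fan_in) ** 0.5 / (fan_in ** 0.5 + fan_out ** 0.5)
            nn.init.normal_(mod.weight, mean=0.0, std=std)
            if mod.bias is not None:
                nn.init.zeros_(mod.bias)
            groups.append({"params": mod.parameters(),
                           "lr": base_lr * fan_out / fan_in})
    return torch.optim.SGD(groups)
```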
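The fused linear + cross-entropy item exists to avoid materializing the full `[tokens, vocab]` logits tensor. The pure-PyTorch chunked version below sketches the memory idea under that assumption; the real kernel fuses the matmul and loss in Triton, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def chunked_linear_ce(x, weight, targets, chunk=4096):
    """Cross-entropy over `x @ weight.T`, computed chunk by chunk so only a
    [chunk, vocab] slice of logits is live at a time in the forward pass.
    Sketch only: under autograd each chunk's logits are still retained for
    backward; the fused Triton kernel avoids that by recomputing."""
    total = x.new_zeros(())
    for i in range(0, x.shape[0], chunk):
        logits = x[i:i + chunk] @ weight.t()  # [chunk, vocab]
        total = total + F.cross_entropy(logits, targets[i:i + chunk],
                                        reduction="sum")
    return total / x.shape[0]

# usage: x [N, d] hidden states, weight [vocab, d], targets [N] token ids
x = torch.randn(8192, 512, requires_grad=True)
weight = torch.randn(32000, 512, requires_grad=True)
targets = torch.randint(0, 32000, (8192,))
loss = chunked_linear_ce(x, weight, targets)
loss.backward()
```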