🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash attention v2.
☆282 · Updated 3 months ago (Nov 24, 2025)
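Since fms-fsdp builds on native PyTorch only, a minimal sketch of the two features named above may help orient the comparison below. This is an assumption about typical usage of the PyTorch APIs, not code taken from the repository; the tensor shapes and the commented-out `model` wrapping are illustrative.

```python
# Minimal sketch (assumed usage, not fms-fsdp's actual training code):
# SDPA dispatching to the Flash Attention v2 kernel, and FSDP sharding a model.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# 1) SDPA: scaled_dot_product_attention can dispatch to the Flash Attention
#    kernel; this requires a CUDA device and fp16/bf16 inputs.
q = k = v = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.bfloat16)
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# 2) FSDP: shard a module's parameters across ranks (needs torch.distributed
#    to be initialized first, e.g. when launched via torchrun).
# model = FSDP(model, use_orig_params=True)
```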
Alternatives and similar repositories for fms-fsdp
Users interested in fms-fsdp are comparing it to the libraries listed below.
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆220 · Updated this week
- ☆25 · Updated last year (Sep 9, 2024)
- Applied AI experiments and examples for PyTorch ☆319 · Updated 6 months ago (Aug 22, 2025)
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models. ☆13 · Updated last month (Jan 30, 2026)
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆487 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated last month (Feb 19, 2026)
- ☆92 · Updated last year (Jul 5, 2024)
- Pipeline Parallelism for PyTorch ☆785 · Updated last year (Aug 21, 2024)
- Ring attention implementation with flash attention ☆996 · Updated 6 months ago (Sep 10, 2025)
- Parallel Associative Scan for Language Models ☆18 · Updated 2 years ago (Jan 8, 2024)
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆2,116 · Updated 6 months ago (Aug 26, 2025)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆698 · Updated last month (Jan 26, 2026)
- ☆20 · Updated last year (May 30, 2024)
- ☆19 · Updated 3 months ago (Dec 4, 2025)
- PyTorch native quantization and sparsity for training and inference ☆2,730 · Updated last week (Mar 14, 2026)
- ☆34 · Updated last year (Sep 10, 2024)
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,630 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆649 · Updated 2 months ago (Jan 15, 2026)
- Checkpointable dataset utilities for foundation model training ☆32 · Updated 2 years ago (Jan 29, 2024)
- Helpful tools and examples for working with flex-attention ☆1,157 · Updated last month (Feb 8, 2026)
- ☆124 · Updated last year (May 28, 2024)
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · Updated 2 weeks ago (Mar 3, 2026)
- Multipack distributed sampler for fast padding-free training of LLMs ☆206 · Updated last year (Aug 10, 2024)
- Large Context Attention ☆769 · Updated 5 months ago (Oct 13, 2025)
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year (Apr 17, 2024)
- Triton-based implementation of Sparse Mixture of Experts. ☆270 · Updated 5 months ago (Oct 3, 2025)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated this week
- 🚀 Collection of tuning recipes with HuggingFace SFTTrainer and PyTorch FSDP. ☆56 · Updated last week (Mar 9, 2026)
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆598 · Updated 7 months ago (Aug 12, 2025)
- Zero Bubble Pipeline Parallelism ☆451 · Updated 10 months ago (May 7, 2025)
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. ☆4,757 · Updated 8 months ago (Jul 18, 2025)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,187 · Updated 6 months ago (Aug 22, 2025)
- ☆20 · Updated 2 years ago (Jul 12, 2023)
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year (Oct 5, 2024)
- PyTorch native post-training library ☆5,703 · Updated last week (Mar 14, 2026)
- Efficient PScan implementation in PyTorch ☆17 · Updated 2 years ago (Jan 2, 2024)