🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash attention v2.
★280 · Updated Nov 24, 2025 (3 months ago)
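As a rough illustration of what "native PyTorch features" means here, the sketch below wraps a toy attention module in FSDP and computes attention with torch.nn.functional.scaled_dot_product_attention (SDPA), which can dispatch to a Flash Attention v2 kernel on supported GPUs. This is not code from fms-fsdp; the module, names, and dimensions are made up, and it assumes PyTorch ≥ 2.0 with a distributed process group already initialized.

```python
# Hypothetical sketch (not from fms-fsdp): PyTorch-native FSDP + SDPA.
import torch
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class TinySelfAttention(torch.nn.Module):
    """Toy causal self-attention block, used only to demonstrate SDPA."""

    def __init__(self, dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to the (batch, n_heads, seq, head_dim) layout expected by SDPA.
        q, k, v = (t.view(b, s, self.n_heads, d // self.n_heads).transpose(1, 2)
                   for t in (q, k, v))
        # SDPA selects an efficient backend (e.g. a Flash Attention kernel) when available.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, s, d))


# In a multi-GPU job (after torch.distributed.init_process_group("nccl")),
# parameters can be sharded across ranks with FSDP:
#   model = FSDP(TinySelfAttention().cuda())
#   loss = model(batch).sum(); loss.backward()
```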
Alternatives and similar repositories for fms-fsdp
Users who are interested in fms-fsdp are comparing it to the libraries listed below.
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ★219 · Updated Feb 16, 2026 (last week)
- Applied AI experiments and examples for PyTorch ★318 · Updated Aug 22, 2025 (6 months ago)
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ★478 · Updated Feb 3, 2026 (3 weeks ago)
- A PyTorch native platform for training generative AI models ★5,098 · Updated this week
- Minimalistic large language model 3D-parallelism training ★2,569 · Updated Feb 19, 2026 (last week)
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models. ★13 · Updated Jan 30, 2026 (last month)
- Parallel Associative Scan for Language Models ★18 · Updated Jan 8, 2024 (2 years ago)
- Pipeline Parallelism for PyTorch ★785 · Updated Aug 21, 2024 (last year)
- Ring attention implementation with flash attention ★986 · Updated Sep 10, 2025 (5 months ago)
- Checkpointable dataset utilities for foundation model training ★32 · Updated Jan 29, 2024 (2 years ago)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ★693 · Updated Jan 26, 2026 (last month)
- PyTorch native quantization and sparsity for training and inference ★2,696 · Updated Feb 22, 2026 (last week)
- Helpful tools and examples for working with flex-attention ★1,136 · Updated Feb 8, 2026 (2 weeks ago)
- Minimalistic 4D-parallelism distributed training framework for education purpose ★2,090 · Updated Aug 26, 2025 (6 months ago)
- Large Context Attention ★769 · Updated Oct 13, 2025 (4 months ago)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ★644 · Updated Jan 15, 2026 (last month)
- 🚀 Efficient implementations of state-of-the-art linear attention models ★4,428 · Updated this week
- Simple and efficient pytorch-native transformer training and inference (batched) ★79 · Updated Apr 2, 2024 (last year)
- HGRN2: Gated Linear RNNs with State Expansion ★56 · Updated Aug 20, 2024 (last year)
- Triton-based implementation of Sparse Mixture of Experts. ★266 · Updated Oct 3, 2025 (4 months ago)
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ★938 · Updated Nov 27, 2025 (3 months ago)
- Minimal (400 LOC) implementation Maximum (multi-node, FSDP) GPT training ★132 · Updated Apr 17, 2024 (last year)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ★24 · Updated Jun 6, 2024 (last year)
- FlashInfer: Kernel Library for LLM Serving ★5,009 · Updated this week
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) ★10 · Updated Feb 21, 2023 (3 years ago)
- Multipack distributed sampler for fast padding-free training of LLMs ★204 · Updated Aug 10, 2024 (last year)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ★595 · Updated Aug 12, 2025 (6 months ago)
- train with kittens! ★63 · Updated Oct 25, 2024 (last year)
- Framework to reduce autotune overhead to zero for well known deployments. ★96 · Updated Sep 19, 2025 (5 months ago)
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ★27 · Updated Apr 17, 2024 (last year)