foundation-model-stack / fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and the SDPA implementation of Flash Attention v2.
☆265 · Updated last month
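To show what those two native features look like in use, here is a minimal sketch (not fms-fsdp's actual training code; the MLP model, placeholder loss, and launch setup are illustrative assumptions):

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main() -> None:
    # Assumes a multi-GPU launch, e.g. `torchrun --nproc_per_node=8 sketch.py`.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # SDPA: on supported GPUs this call dispatches to the Flash Attention v2 kernel.
    q = k = v = torch.randn(1, 8, 2048, 64, device="cuda", dtype=torch.bfloat16)
    _ = F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # FSDP: shards parameters, gradients, and optimizer state across ranks.
    # The tiny MLP stands in for a real transformer block stack.
    model = FSDP(
        nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).cuda()
    )
    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

    x = torch.randn(4, 512, device="cuda")
    loss = model(x).square().mean()  # placeholder loss for the sketch
    loss.backward()
    optim.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```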
Alternatives and similar repositories for fms-fsdp
Users who are interested in fms-fsdp are comparing it to the libraries listed below.
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆209 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆238 · Updated 2 weeks ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆537 · Updated 3 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- Applied AI experiments and examples for PyTorch ☆295 · Updated 3 weeks ago
- Load compute kernels from the Hub ☆271 · Updated this week
- ☆216 · Updated 7 months ago
- ring-attention experiments ☆150 · Updated 10 months ago
- Large Context Attention ☆736 · Updated 7 months ago
- Fast low-bit matmul kernels in Triton ☆357 · Updated this week
- A Quirky Assortment of CuTe Kernels ☆450 · Updated last week
- Efficient LLM Inference over Long Sequences ☆391 · Updated 2 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆216 · Updated last year
- Megatron's multi-modal data loader ☆243 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆395 · Updated 2 weeks ago
- LLM KV cache compression made easy ☆604 · Updated this week
- ☆118 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆285 · Updated 8 months ago
- Collection of kernels written in the Triton language ☆154 · Updated 5 months ago
- Cataloging released Triton kernels. ☆252 · Updated this week
- Ring attention implementation with flash attention ☆864 · Updated last month
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆86 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆575 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆239 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆245 · Updated 7 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆157 · Updated this week
- ☆233 · Updated 3 weeks ago
- Microsoft Automatic Mixed Precision Library ☆619 · Updated 11 months ago
- ☆197 · Updated 4 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆181 · Updated 2 months ago