foundation-model-stack / fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for distributed training and the SDPA implementation of Flash Attention v2.
☆275 · Updated last month
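The two PyTorch-native pieces named above compose directly. As an illustration only (a minimal sketch, not code from the fms-fsdp repo; the module name and dimensions are invented for the example), the snippet below routes attention through `torch.nn.functional.scaled_dot_product_attention`, which dispatches to Flash Attention v2 kernels on supported GPUs, and wraps the module with `FullyShardedDataParallel`:

```python
# Minimal sketch: PyTorch-native SDPA attention wrapped with FSDP.
# Assumes PyTorch >= 2.0; all names here are hypothetical examples,
# not taken from fms-fsdp.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

class SelfAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim) as SDPA expects.
        q, k, v = (z.view(b, t, self.n_heads, -1).transpose(1, 2)
                   for z in (q, k, v))
        # SDPA selects a fused kernel (Flash Attention v2 on supported GPUs).
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

# After torch.distributed is initialized (e.g. launched via torchrun),
# the module's parameters can be sharded across ranks with one wrapper:
# model = FSDP(SelfAttention(dim=1024, n_heads=16).cuda())
```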
Alternatives and similar repositories for fms-fsdp
Users interested in fms-fsdp are comparing it to the libraries listed below.
- Load compute kernels from the Hub ☆352 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated 2 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. ☆257 · Updated 2 months ago
- ring-attention experiments ☆160 · Updated last year
- ☆225 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆549 · Updated 7 months ago
- Large Context Attention ☆757 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆413 · Updated last week
- Accelerating MoE with IO and Tile-aware Optimizations ☆469 · Updated this week
- HuggingFace conversion and training library for Megatron-based models ☆295 · Updated this week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆219 · Updated last year
- ☆121 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 6 months ago
- ☆268 · Updated this week
- Cataloging released Triton kernels. ☆278 · Updated 3 months ago
- Megatron's multi-modal data loader ☆297 · Updated last week
- LLM KV cache compression made easy ☆729 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆459 · Updated 3 weeks ago
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆214 · Updated this week
- Explorations into some recent techniques surrounding speculative decoding ☆295 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆140 · Updated last week
- Ring attention implementation with flash attention ☆949 · Updated 3 months ago
- ☆568 · Updated 3 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆225 · Updated this week
- ☆206 · Updated 7 months ago
- A Quirky Assortment of CuTe Kernels ☆724 · Updated this week
- Collection of kernels written in Triton language ☆173 · Updated 8 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆263 · Updated this week