foundation-model-stack / fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for distributed training and the SDPA implementation of Flash Attention v2.
☆255 · Updated this week
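
To make the description above concrete, here is a minimal sketch of the two PyTorch-native pieces it names: a toy attention module computed via SDPA (which dispatches to a Flash Attention v2 kernel on supported GPUs and falls back to a math implementation elsewhere), with the FSDP wrap shown in a comment. The module and all names are illustrative, not fms-fsdp's actual code.

```python
# Illustrative sketch, not fms-fsdp code: a toy causal self-attention block
# using SDPA, plus the (commented) FSDP wrap used for distributed training.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP  # noqa: F401

class ToySelfAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (b, t, d) -> (b, n_heads, t, head_dim), the layout SDPA expects.
        q, k, v = (z.view(b, t, self.n_heads, d // self.n_heads).transpose(1, 2)
                   for z in (q, k, v))
        # SDPA picks the fastest available backend (flash / mem-efficient / math).
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(y.transpose(1, 2).reshape(b, t, d))

# After torch.distributed is initialized (e.g. via torchrun), parameters
# would be sharded with:  model = FSDP(ToySelfAttention(1024, 16).cuda())
x = torch.randn(2, 16, 64)
print(ToySelfAttention(64, 4)(x).shape)  # torch.Size([2, 16, 64])
```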
Alternatives and similar repositories for fms-fsdp
Users interested in fms-fsdp are comparing it to the libraries listed below
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆205 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆224 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆281 · Updated last month
- Load compute kernels from the Hub ☆203 · Updated this week
- ring-attention experiments ☆144 · Updated 8 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆529 · Updated last month
- Scalable toolkit for efficient model reinforcement ☆499 · Updated this week
- A Quirky Assortment of CuTe Kernels ☆281 · Updated this week
- Large Context Attention ☆718 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆330 · Updated this week
- Cataloging released Triton kernels. ☆242 · Updated 6 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆212 · Updated 10 months ago
- LLM KV cache compression made easy ☆535 · Updated this week
- Efficient LLM Inference over Long Sequences ☆382 · Updated 2 weeks ago
- ☆198 · Updated 5 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆359 · Updated 2 weeks ago
- ☆225 · Updated this week
- Explorations into some recent techniques surrounding speculative decoding (a minimal sketch of the idea follows this list) ☆272 · Updated 6 months ago
- Collection of kernels written in Triton language ☆136 · Updated 3 months ago
- Megatron's multi-modal data loader ☆219 · Updated this week
- Ring attention implementation with flash attention ☆800 · Updated last week
- ☆112 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆561 · Updated 3 weeks ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆360 · Updated 11 months ago
- 🔥 A minimal training framework for scaling FLA models ☆188 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆156 · Updated this week
- ☆116 · Updated last month
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆318 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆135 · Updated this week
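
As referenced in the speculative decoding entry above, here is a minimal greedy-acceptance sketch of the draft-and-verify loop. It is a simplification under stated assumptions, not any listed repo's API: `target`, `draft`, and `speculative_step` are hypothetical names, with both models taken as callables mapping token ids of shape (1, T) to logits of shape (1, T, V).

```python
# Hedged sketch of greedy speculative decoding; all names are illustrative.
import torch

@torch.no_grad()
def speculative_step(target, draft, ids: torch.Tensor, k: int = 4) -> torch.Tensor:
    # 1) Draft k tokens autoregressively with the cheap model (greedy).
    draft_ids = ids
    for _ in range(k):
        nxt = draft(draft_ids)[:, -1].argmax(dim=-1, keepdim=True)
        draft_ids = torch.cat([draft_ids, nxt], dim=-1)

    # 2) Verify all k drafted tokens with a single target forward pass.
    logits = target(draft_ids)  # (1, n + k, vocab)
    n = ids.shape[1]
    out = ids
    for i in range(k):
        # The target's prediction for position n + i is read at index n + i - 1.
        tgt_tok = logits[:, n + i - 1].argmax(dim=-1, keepdim=True)
        out = torch.cat([out, tgt_tok], dim=-1)
        if tgt_tok.item() != draft_ids[0, n + i].item():
            break  # first mismatch: keep the target's correction and stop
    else:
        # All k drafts accepted: the same pass yields one extra token for free.
        bonus = logits[:, n + k - 1].argmax(dim=-1, keepdim=True)
        out = torch.cat([out, bonus], dim=-1)
    return out

# Smoke test with a random stand-in "model" (real use pairs a small draft
# LM with a large target LM that share a tokenizer).
def fake_lm(ids: torch.Tensor) -> torch.Tensor:
    return torch.randn(1, ids.shape[1], 100)

print(speculative_step(fake_lm, fake_lm, torch.zeros(1, 8, dtype=torch.long)).shape)
```

The speedup comes from the verify step: when all k drafts are accepted, one forward pass of the large model has produced k+1 tokens instead of one.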