foundation-model-stack / fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and the SDPA implementation of Flash Attention v2.
☆244 · Updated this week
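The two native PyTorch features named in the description can be exercised in isolation. The snippet below is a minimal, hypothetical sketch (not fms-fsdp's actual training code): it wraps a toy attention module in `FullyShardedDataParallel` and routes attention through `torch.nn.functional.scaled_dot_product_attention`, which dispatches to a Flash Attention v2 kernel on supported GPUs.

```python
# Minimal sketch of FSDP + SDPA, assuming a torchrun launch, e.g.:
#   torchrun --nproc-per-node=1 sketch.py
# This is illustrative only, not fms-fsdp's training loop.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class TinySelfAttention(nn.Module):
    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim), the layout SDPA expects.
        q, k, v = (t.view(b, s, self.n_heads, -1).transpose(1, 2) for t in (q, k, v))
        # SDPA picks a fused Flash-Attention-v2 kernel on supported GPUs;
        # is_causal applies the causal mask without materializing it.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, s, d))


if __name__ == "__main__":
    # gloo keeps the sketch runnable on CPU; real multi-GPU runs use nccl.
    dist.init_process_group(backend="gloo")
    model = FSDP(TinySelfAttention())  # parameters are sharded across ranks
    loss = model(torch.randn(2, 16, 256)).sum()
    loss.backward()
    dist.destroy_process_group()
```

In an actual pretraining setup the FSDP wrap would cover a full transformer with an auto-wrap policy per layer, but the mechanics shown here are the same.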
Alternatives and similar repositories for fms-fsdp:
Users interested in fms-fsdp are comparing it to the libraries listed below.
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆194 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 9 months ago
- Applied AI experiments and examples for PyTorch ☆262 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆212 · Updated 5 months ago
- ring-attention experiments ☆132 · Updated 6 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆209 · Updated 8 months ago
- ☆181 · Updated 2 months ago
- Ring attention implementation with flash attention ☆757 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆295 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆511 · Updated 6 months ago
- Large Context Attention ☆707 · Updated 3 months ago
- ☆202 · Updated last week
- ☆103 · Updated 11 months ago
- PyTorch per step fault tolerance (actively under development) ☆291 · Updated this week
- LLM KV cache compression made easy ☆471 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆120 · Updated this week
- Load compute kernels from the Hub ☆115 · Updated last week
- Cataloging released Triton kernels. ☆220 · Updated 3 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆261 · Updated 4 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 · Updated this week
- Efficient LLM Inference over Long Sequences ☆372 · Updated this week
- 🔥 A minimal training framework for scaling FLA models ☆117 · Updated this week
- Collection of kernels written in Triton language ☆120 · Updated last month
- Megatron's multi-modal data loader ☆195 · Updated this week
- ☆186 · Updated 7 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆346 · Updated 8 months ago
- ☆209 · Updated 3 months ago
- ☆104 · Updated 8 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆232 · Updated 2 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆488 · Updated 2 weeks ago