lucidrains / ring-attention-pytorch
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch
☆506 · Updated 4 months ago
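For orientation, here is a minimal single-process sketch of the idea the repo implements: the sequence is split into blocks, each ring node keeps its query block, and key/value blocks rotate around the ring while partial attention outputs are merged with a numerically stable online softmax. The function name `ring_attention_sketch` and the in-memory block rotation are illustrative assumptions, not this repo's API; in the real distributed setting each inner-loop step is a send/recv with the next ring neighbor, overlapped with the blockwise attention compute.

```python
import torch

def ring_attention_sketch(q, k, v, num_ring_nodes):
    # q, k, v: (seq_len, dim); split the sequence across hypothetical ring nodes
    q_blocks = q.chunk(num_ring_nodes)
    k_blocks = k.chunk(num_ring_nodes)
    v_blocks = v.chunk(num_ring_nodes)
    scale = q.shape[-1] ** -0.5

    outputs = []
    for qi in q_blocks:
        # running output, row-wise max, and normalizer for the online softmax
        acc = torch.zeros_like(qi)
        row_max = torch.full((qi.shape[0], 1), float('-inf'))
        denom = torch.zeros(qi.shape[0], 1)
        # each iteration stands in for receiving the next k/v block from a ring neighbor
        for kj, vj in zip(k_blocks, v_blocks):
            scores = (qi @ kj.T) * scale
            new_max = torch.maximum(row_max, scores.max(dim=-1, keepdim=True).values)
            correction = torch.exp(row_max - new_max)  # rescale previous partial sums
            p = torch.exp(scores - new_max)
            denom = denom * correction + p.sum(dim=-1, keepdim=True)
            acc = acc * correction + p @ vj
            row_max = new_max
        outputs.append(acc / denom)
    return torch.cat(outputs)

# sanity check against dense attention
q, k, v = (torch.randn(32, 16) for _ in range(3))
ref = torch.softmax((q @ k.T) * 16 ** -0.5, dim=-1) @ v
assert torch.allclose(ring_attention_sketch(q, k, v, 4), ref, atol=1e-5)
```

Because only one key/value block is resident per node at a time, memory per device stays constant in the block size rather than the full sequence length, which is what lets ring attention scale context near-linearly with the number of devices.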
Alternatives and similar repositories for ring-attention-pytorch:
Users interested in ring-attention-pytorch are comparing it to the libraries listed below.
- Large Context Attention ☆690 · Updated last month
- Helpful tools and examples for working with flex-attention ☆695 · Updated this week
- Ring attention implementation with flash attention ☆711 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆524 · Updated last month
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆232 · Updated 2 weeks ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆447 · Updated last month
- LLM KV cache compression made easy ☆440 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆705 · Updated 5 months ago
- ☆393 · Updated 2 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆248 · Updated 3 months ago
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆505 · Updated 2 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆318 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆223 · Updated last month
- [ICML 2024] CLLMs: Consistency Large Language Models ☆386 · Updated 4 months ago
- Microsoft Automatic Mixed Precision Library ☆581 · Updated 5 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆207 · Updated 3 months ago
- Pipeline Parallelism for PyTorch ☆757 · Updated 7 months ago
- ☆218 · Updated 9 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆223 · Updated last month
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆336 · Updated 7 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,216 · Updated 2 weeks ago
- Annotated version of the Mamba paper ☆475 · Updated last year
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆207 · Updated 7 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆435 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆276 · Updated this week
- ☆140 · Updated last year
- Megatron's multi-modal data loader ☆181 · Updated this week
- ring-attention experiments ☆127 · Updated 5 months ago