lucidrains / ring-attention-pytorch
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch
☆549 · May 16, 2025 · Updated 9 months ago
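Below is a minimal single-process sketch of the core idea behind ring attention, assuming nothing about this repository's actual API (all names are illustrative): key/value blocks rotate past a stationary query block while the softmax is accumulated online, so the full attention matrix is never materialized. In the real distributed setting each loop iteration would be one hop of k/v around the device ring via send/recv; here the hops are simulated by iterating over local chunks.

```python
import torch

def ring_attention_sketch(q, k, v, num_ring_chunks = 4):
    # q, k, v: (batch, seq, dim); seq must divide evenly into the ring chunks
    scale = q.shape[-1] ** -0.5
    k_chunks = k.chunk(num_ring_chunks, dim = 1)
    v_chunks = v.chunk(num_ring_chunks, dim = 1)

    out = torch.zeros_like(q)                              # running numerator
    row_max = torch.full((*q.shape[:-1], 1), float('-inf'))  # running row max
    row_sum = torch.zeros((*q.shape[:-1], 1))              # running denominator

    # each iteration stands in for one hop of k/v around the device ring
    for k_blk, v_blk in zip(k_chunks, v_chunks):
        scores = (q @ k_blk.transpose(-2, -1)) * scale     # (batch, seq, chunk)
        blk_max = scores.amax(dim = -1, keepdim = True)
        new_max = torch.maximum(row_max, blk_max)
        # rescale what was accumulated under the old max (log-sum-exp trick)
        correction = (row_max - new_max).exp()
        probs = (scores - new_max).exp()
        row_sum = row_sum * correction + probs.sum(dim = -1, keepdim = True)
        out = out * correction + probs @ v_blk
        row_max = new_max

    return out / row_sum

# sanity check against ordinary full attention
q = k = v = torch.randn(1, 16, 8)
exact = torch.softmax((q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5, dim = -1) @ v
assert torch.allclose(ring_attention_sketch(q, k, v), exact, atol = 1e-5)
```

Because the online-softmax accumulation is exact, the blockwise result matches full attention to numerical precision; what the ring pattern buys is that each device only ever holds one key/value block at a time.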
Alternatives and similar repositories for ring-attention-pytorch
Users interested in ring-attention-pytorch are comparing it to the libraries listed below.
- Ring attention implementation with flash attention ☆980 · Sep 10, 2025 · Updated 5 months ago
- Large Context Attention ☆766 · Oct 13, 2025 · Updated 4 months ago
- ring-attention experiments ☆165 · Oct 17, 2024 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆643 · Jan 15, 2026 · Updated last month
- Implementation of Infini-Transformer in Pytorch ☆112 · Jan 4, 2025 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Aug 18, 2024 · Updated last year
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆135 · Oct 15, 2025 · Updated 4 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- A PyTorch native platform for training generative AI models ☆5,069 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Jun 6, 2025 · Updated 8 months ago
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Dec 22, 2024 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆752 · Sep 27, 2024 · Updated last year
- Tile primitives for speedy kernels ☆3,139 · Feb 10, 2026 · Updated last week
- Efficient Triton Kernels for LLM Training ☆6,141 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,163 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,559 · Updated this week
- Helpful tools and examples for working with flex-attention ☆1,127 · Feb 8, 2026 · Updated last week
- Microsoft Automatic Mixed Precision Library ☆635 · Dec 1, 2025 · Updated 2 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆804 · Jan 30, 2026 · Updated 2 weeks ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,669 · Apr 17, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆4,935 · Feb 10, 2026 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆196 · Jan 7, 2026 · Updated last month
- Experiment of using Tangent to autodiff triton ☆82 · Jan 22, 2024 · Updated 2 years ago
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Mar 11, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆265 · Oct 3, 2025 · Updated 4 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆2,691 · Updated this week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆293 · Jun 3, 2025 · Updated 8 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆342 · Dec 28, 2024 · Updated last year
- ☆45 · Nov 10, 2023 · Updated 2 years ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆179 · Sep 12, 2024 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,180 · Aug 22, 2025 · Updated 5 months ago
- Annotated version of the Mamba paper ☆496 · Feb 27, 2024 · Updated last year
- Implementation of Flash Attention in Jax ☆225 · Mar 1, 2024 · Updated last year
- Implementation of TiTok, proposed by Bytedance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆182 · Jun 20, 2024 · Updated last year
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week