Large Context Attention
☆769 · Updated Oct 13, 2025
Alternatives and similar repositories for ringattention
Users interested in ringattention are comparing it to the libraries listed below.
- Ring attention implementation with flash attention · ☆1,015 · Updated Sep 10, 2025
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ☆548 · Updated May 16, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference · ☆666 · Updated Jan 15, 2026
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆221 · Updated Aug 19, 2024
- ☆47 · Updated Nov 10, 2023
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,312 · Updated May 1, 2026
- ring-attention experiments · ☆166 · Updated Oct 17, 2024
- Fast and memory-efficient exact attention · ☆23,628 · Updated this week
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. · ☆50 · Updated Jun 16, 2023
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. · ☆757 · Updated Sep 27, 2024
- Large World Model -- Modeling Text and Video with Millions Context · ☆7,408 · Updated Oct 19, 2024
- Helpful tools and examples for working with flex-attention · ☆1,182 · Updated Apr 13, 2026
- FlashInfer: Kernel Library for LLM Serving · ☆5,544 · Updated May 2, 2026
- Distributed Compiler based on Triton for Parallel Systems · ☆1,420 · Updated Apr 22, 2026
- A PyTorch native platform for training generative AI models · ☆5,309 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,710 · Updated Apr 17, 2024
- Transformer related optimization, including BERT, GPT · ☆6,415 · Updated Mar 27, 2024
- Zero Bubble Pipeline Parallelism · ☆452 · Updated May 7, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ☆1,333 · Updated Mar 6, 2025
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" · ☆450 · Updated Oct 16, 2024
- Minimalistic large language model 3D-parallelism training · ☆2,678 · Updated Apr 7, 2026
- 🚀 Efficient implementations for emerging model architectures · ☆5,032 · Updated May 1, 2026
- Efficient Triton Kernels for LLM Training · ☆6,331 · Updated Apr 30, 2026
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,678 · Updated Mar 8, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) · ☆209 · Updated May 20, 2024
- Ongoing research training transformer models at scale · ☆16,253 · Updated this week
- Microsoft Automatic Mixed Precision Library · ☆636 · Updated Dec 1, 2025
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,225 · Updated Jul 11, 2024
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,297 · Updated Aug 28, 2025
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆497 · Updated Mar 19, 2024
- Triton-based implementation of Sparse Mixture of Experts. · ☆273 · Updated Oct 3, 2025
- Tile primitives for speedy kernels · ☆3,336 · Updated Apr 29, 2026
- Linear Attention Sequence Parallelism (LASP) · ☆88 · Updated Jun 4, 2024
- Training and serving large-scale neural networks with auto parallelization. · ☆3,187 · Updated Dec 9, 2023
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training · ☆1,878 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆2,246 · Updated Aug 14, 2025
- Development repository for the Triton language and compiler · ☆19,124 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ☆2,730 · Updated Jun 25, 2024
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆133 · Updated Dec 3, 2024
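For orientation, many of the ring/sequence-parallel repositories above share one core trick: shard the sequence so each device keeps its query block local, rotate key/value blocks around a ring, and merge partial results with an online (log-sum-exp) softmax so the full attention matrix is never materialized. Below is a minimal single-process sketch of that idea in PyTorch; the chunks stand in for devices, `ring_attention_sim` is a hypothetical name, and it does not reflect any listed repository's actual API.

```python
# Hypothetical single-process simulation of the ring-attention idea:
# chunks play the role of devices, and the inner loop plays the role
# of K/V blocks traveling around the ring.
import torch

def ring_attention_sim(q_chunks, k_chunks, v_chunks, scale):
    n = len(q_chunks)
    outs = []
    for i in range(n):
        q = q_chunks[i]                                  # local queries stay put
        acc = torch.zeros_like(q)                        # weighted-value accumulator
        row_max = torch.full(q.shape[:-1], float("-inf"))  # running max per query row
        row_sum = torch.zeros(q.shape[:-1])              # running softmax denominator
        for step in range(n):                            # each "hop" of the ring
            j = (i + step) % n
            s = (q @ k_chunks[j].transpose(-2, -1)) * scale
            new_max = torch.maximum(row_max, s.amax(dim=-1))
            correction = torch.exp(row_max - new_max)    # rescale old partial sums
            p = torch.exp(s - new_max.unsqueeze(-1))
            row_sum = row_sum * correction + p.sum(dim=-1)
            acc = acc * correction.unsqueeze(-1) + p @ v_chunks[j]
            row_max = new_max
        outs.append(acc / row_sum.unsqueeze(-1))         # finalize local output block
    return torch.cat(outs, dim=-2)

# Matches dense attention on a toy example:
torch.manual_seed(0)
q, k, v = (torch.randn(8, 16) for _ in range(3))
ref = torch.softmax(q @ k.T / 16**0.5, dim=-1) @ v
out = ring_attention_sim(q.chunk(4), k.chunk(4), v.chunk(4), 16**-0.5)
assert torch.allclose(out, ref, atol=1e-5)
```

In a real distributed implementation, the inner loop's indexing is replaced by point-to-point sends of K/V blocks to the next rank, overlapped with the local attention compute; the online-softmax merge is what lets each rank process blocks in any order.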