Large Context Attention
☆769 · Updated Oct 13, 2025
Alternatives and similar repositories for ringattention
Users interested in ringattention are comparing it to the libraries listed below; a minimal sketch of the ring attention pattern itself follows the list.
- Ring attention implementation with flash attention ☆998 · Updated Sep 10, 2025
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆548 · Updated May 16, 2025
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆653 · Updated Jan 15, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,246 · Updated this week
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- ring-attention experiments ☆168 · Updated Oct 17, 2024
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆755 · Updated Sep 27, 2024
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆50 · Updated Jun 16, 2023
- Large World Model -- Modeling Text and Video with Millions of Tokens of Context ☆7,402 · Updated Oct 19, 2024
- FlashInfer: Kernel Library for LLM Serving ☆5,231 · Updated this week
- Helpful tools and examples for working with flex-attention ☆1,161 · Updated Feb 8, 2026
- Distributed Compiler based on Triton for Parallel Systems ☆1,398 · Updated Mar 11, 2026
- A PyTorch native platform for training generative AI models ☆5,191 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,686 · Updated Apr 17, 2024
- Transformer-related optimization, including BERT, GPT ☆6,400 · Updated Mar 27, 2024
- Zero Bubble Pipeline Parallelism ☆452 · Updated May 7, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,324 · Updated Mar 6, 2025
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,626 · Updated Feb 19, 2026
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Updated Oct 16, 2024
- Efficient Triton Kernels for LLM Training ☆6,242 · Updated this week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,672 · Updated Mar 8, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated May 20, 2024
- Ongoing research training transformer models at scale ☆15,827 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆635 · Updated Dec 1, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Updated Aug 28, 2025
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,209 · Updated Jul 11, 2024
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆490 · Updated Mar 19, 2024
- Triton-based implementation of Sparse Mixture of Experts. ☆270 · Updated Oct 3, 2025
- Tile primitives for speedy kernels ☆3,244 · Updated Mar 17, 2026
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated Jun 4, 2024
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Updated Dec 9, 2023
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,869 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,236 · Updated Aug 14, 2025
- Development repository for the Triton language and compiler ☆18,781 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆133 · Updated Dec 3, 2024
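
For orientation, several of the entries above implement some variant of the blockwise pattern that ringattention is built on: each device keeps its query block fixed while key/value blocks circulate around a ring, and an online-softmax accumulator keeps per-step memory constant instead of materializing the full attention matrix. The sketch below is a minimal, single-process illustration of that accumulation only; all names are illustrative and nothing here is taken from any listed repository. Real implementations shard the blocks across devices and overlap this loop with peer-to-peer communication.

```python
# Minimal single-process sketch of the ring attention pattern
# (hypothetical helper, not the API of any repo listed above).
import torch

def ring_attention(q, k, v, num_blocks):
    # q, k, v: (seq_len, dim); split along the sequence into "ring" blocks
    qs, ks, vs = q.chunk(num_blocks), k.chunk(num_blocks), v.chunk(num_blocks)
    scale = q.shape[-1] ** -0.5
    outs = []
    for qi in qs:  # each query block would live on its own device
        # running max, normalizer, and unnormalized output (online softmax)
        m = torch.full((qi.shape[0], 1), float("-inf"))
        l = torch.zeros(qi.shape[0], 1)
        o = torch.zeros_like(qi)
        for j in range(num_blocks):
            # in a real ring, block j's k/v arrive from the previous rank
            s = qi @ ks[j].T * scale                       # block scores
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            p = torch.exp(s - m_new)                       # shifted probabilities
            alpha = torch.exp(m - m_new)                   # rescale old statistics
            l = alpha * l + p.sum(dim=-1, keepdim=True)
            o = alpha * o + p @ vs[j]
            m = m_new
        outs.append(o / l)                                 # normalize at the end
    return torch.cat(outs)

# sanity check: the blockwise result matches dense attention exactly
q, k, v = (torch.randn(128, 64) for _ in range(3))
ref = torch.softmax(q @ k.T * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(ring_attention(q, k, v, 4), ref, atol=1e-4)
```

Because the online-softmax rescaling is exact, the choice of block count only changes memory and communication, not the result; that is what lets the projects above scale context length with the number of devices in the ring.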