Large Context Attention
★770 · Oct 13, 2025 · Updated 6 months ago
Alternatives and similar repositories for ringattention
Users interested in ringattention are comparing it to the libraries listed below.
- Ring attention implementation with flash attention (★1,006 · Sep 10, 2025 · Updated 7 months ago)
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in Pytorch (★548 · May 16, 2025 · Updated 11 months ago)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (★222 · Aug 19, 2024 · Updated last year)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (★664 · Jan 15, 2026 · Updated 3 months ago)
- (★46 · Nov 10, 2023 · Updated 2 years ago)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (★3,280 · Updated this week)
- ring-attention experiments (★165 · Oct 17, 2024 · Updated last year)
- Fast and memory-efficient exact attention (★23,344 · Updated this week)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. (★755 · Sep 27, 2024 · Updated last year)
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. (★50 · Jun 16, 2023 · Updated 2 years ago)
- Large World Model -- Modeling Text and Video with Millions Context (★7,407 · Oct 19, 2024 · Updated last year)
- FlashInfer: Kernel Library for LLM Serving (★5,372 · Apr 11, 2026 · Updated last week)
- Helpful tools and examples for working with flex-attention (★1,174 · Updated this week)
- Distributed Compiler based on Triton for Parallel Systems (★1,403 · Apr 10, 2026 · Updated last week)
- A PyTorch native platform for training generative AI models (★5,242 · Updated this week)
- YaRN: Efficient Context Window Extension of Large Language Models (★1,695 · Apr 17, 2024 · Updated 2 years ago)
- Transformer related optimization, including BERT, GPT (★6,412 · Mar 27, 2024 · Updated 2 years ago)
- Zero Bubble Pipeline Parallelism (★452 · May 7, 2025 · Updated 11 months ago)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (★1,329 · Mar 6, 2025 · Updated last year)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (★450 · Oct 16, 2024 · Updated last year)
- Minimalistic large language model 3D-parallelism training (★2,654 · Apr 7, 2026 · Updated last week)
- Efficient implementations for emerging model architectures (★4,878 · Updated this week)
- Efficient Triton Kernels for LLM Training (★6,279 · Updated this week)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (★1,675 · Mar 8, 2024 · Updated 2 years ago)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (★209 · May 20, 2024 · Updated last year)
- Ongoing research training transformer models at scale (★16,073 · Updated this week)
- Microsoft Automatic Mixed Precision Library (★636 · Dec 1, 2025 · Updated 4 months ago)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (★7,211 · Jul 11, 2024 · Updated last year)
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context (★497 · Mar 19, 2024 · Updated 2 years ago)
- Triton-based implementation of Sparse Mixture of Experts. (★274 · Oct 3, 2025 · Updated 6 months ago)
- Tile primitives for speedy kernels (★3,312 · Apr 8, 2026 · Updated last week)
- Linear Attention Sequence Parallelism (LASP) (★88 · Jun 4, 2024 · Updated last year)
- Training and serving large-scale neural networks with auto parallelization. (★3,187 · Dec 9, 2023 · Updated 2 years ago)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (★1,872 · Updated this week)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (★2,244 · Aug 14, 2025 · Updated 8 months ago)
- Development repository for the Triton language and compiler (★18,974 · Updated this week)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (★2,722 · Jun 25, 2024 · Updated last year)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (★132 · Dec 3, 2024 · Updated last year)
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies (★420 · Aug 21, 2025 · Updated 7 months ago)