USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference
☆666 · Updated Jan 15, 2026
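For context on the comparisons below: the "2D" in USP's name refers to factoring the sequence-parallel process group into two dimensions, a Ulysses-style dimension (all-to-all over attention heads, as in DeepSpeed-Ulysses) and a Ring-Attention dimension (peer-to-peer passing of KV chunks around a ring). The snippet below is a minimal sketch of that 2D process-grid setup using a plain PyTorch DeviceMesh; the function name, dimension names, and degrees are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch (assumes PyTorch >= 2.2 with torch.distributed already initialized).
# Illustrates the 2D process grid behind hybrid sequence parallelism: one dimension for
# Ulysses-style all-to-all over attention heads, one dimension for ring attention over
# sequence (KV) chunks. Names here are illustrative, not the library's API.
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh


def build_seq_parallel_groups(ulysses_degree: int):
    """Factor world_size into a (ring, ulysses) grid and return the two process groups."""
    world_size = dist.get_world_size()
    assert world_size % ulysses_degree == 0, "ulysses_degree must divide world_size"
    ring_degree = world_size // ulysses_degree

    # Ranks in the same "ulysses" group exchange head-sharded QKV via all-to-all;
    # ranks in the same "ring" group circulate KV blocks (ring attention).
    mesh = init_device_mesh(
        "cuda",
        (ring_degree, ulysses_degree),
        mesh_dim_names=("ring", "ulysses"),
    )
    return mesh["ring"].get_group(), mesh["ulysses"].get_group()
```

With 8 GPUs and ulysses_degree=2, for example, this yields a 4×2 grid: heads are first scattered across each 2-rank Ulysses group, and each head shard then runs ring attention across its 4-rank ring group.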
Alternatives and similar repositories for long-context-attention
Users interested in long-context-attention are comparing it to the libraries listed below.
- Ring attention implementation with flash attention ☆1,014 · Updated Sep 10, 2025
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated Jun 17, 2024
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,610 · Updated Apr 27, 2026
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Updated Aug 19, 2024
- Large Context Attention ☆770 · Updated Oct 13, 2025
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆757 · Updated Sep 27, 2024
- Zero Bubble Pipeline Parallelism ☆452 · Updated May 7, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,420 · Updated Apr 22, 2026
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,295 · Updated Aug 28, 2025
- LLM training technologies developed by kwai ☆71 · Updated Jan 21, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,544 · Updated this week
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,888 · Updated this week
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,009 · Updated Mar 3, 2026
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI ☆5,242 · Updated this week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆795 · Updated Apr 21, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,312 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆548 · Updated May 16, 2025
- PyTorch bindings for CUTLASS grouped GEMM ☆186 · Updated Apr 8, 2026
- A throughput-oriented high-performance serving framework for LLMs ☆954 · Updated Mar 29, 2026
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆4,036 · Updated this week
- VideoSys: An easy and efficient system for video generation ☆2,023 · Updated Aug 27, 2025
- A parallelized VAE that avoids OOM for high-resolution image generation ☆91 · Updated Apr 21, 2026
- Ongoing research training transformer models at scale ☆16,203 · Updated this week
- Context parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆426 · Updated Jul 5, 2025
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆419 · Updated Aug 21, 2025
- ☆132 · Updated Nov 11, 2024
- A lightweight design for computation-communication overlap ☆229 · Updated Jan 20, 2026
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,247 · Updated Aug 14, 2025
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆58 · Updated Jul 23, 2024
- 🚀 Efficient implementations for emerging model architectures ☆4,999 · Updated Apr 27, 2026
- ☆78 · Updated May 4, 2021
- Tile primitives for speedy kernels ☆3,336 · Updated this week
- A PyTorch native platform for training generative AI models ☆5,286 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆507 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆496 · Updated Jan 8, 2026
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆482 · Updated May 30, 2025
- ring-attention experiments ☆166 · Updated Oct 17, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- Helpful tools and examples for working with flex-attention ☆1,182 · Updated Apr 13, 2026