MagiAttention: A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training
☆795 · updated Apr 21, 2026
Alternatives and similar repositories for MagiAttention
Users interested in MagiAttention are comparing it to the libraries listed below.
- MAGI-1: Autoregressive Video Generation at Scale ☆3,685 · updated Jun 17, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,420 · updated Apr 22, 2026
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,297 · updated Aug 28, 2025
- ☆66 · updated Apr 26, 2025
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,610 · updated Apr 27, 2026
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆666 · updated Jan 15, 2026
- Ring attention implementation with flash attention (see the ring-attention sketch after this list) ☆1,015 · updated Sep 10, 2025
- Quantized Attention on GPU ☆44 · updated Nov 22, 2024
- A unified inference and post-training framework for accelerated video generation ☆3,446 · updated this week
- DeeperGEMM: crazy optimized version ☆86 · updated May 5, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆993 · updated Feb 5, 2026
- [ICCV 2025] Official repo for "GigaTok: Scaling Visual Tokenizers to 3 Billion Parameters for Autoregressive Image Generation" ☆203 · updated Jan 7, 2026
- Tile primitives for speedy kernels ☆3,336 · updated Apr 29, 2026
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · updated this week
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆662 · updated Mar 6, 2026
- ☆52 · updated May 19, 2025
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,312 · updated this week
- Helpful tools and examples for working with flex-attention ☆1,182 · updated Apr 13, 2026
- Accelerate LLM preference tuning via prefix sharing with a single line of code (see the prefix-sharing sketch after this list) ☆52 · updated Jul 4, 2025
- ☆119 · updated May 19, 2025
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,888 · updated this week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆277 · updated Jul 6, 2025
- flex-block-attn: an efficient block-sparse attention computation library (see the block-sparse sketch after this list) ☆130 · updated Dec 26, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,544 · updated this week
- [ICML 2025] SpargeAttention: a training-free sparse attention that accelerates inference for any model ☆991 · updated Feb 25, 2026
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention, without losing end-t… (see the quantized-attention sketch after this list) ☆3,342 · updated Jan 17, 2026
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance ☆151 · updated May 10, 2025
- Perplexity GPU Kernels ☆570 · updated Nov 7, 2025
- ☆38 · updated Aug 7, 2025
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups at batch sizes of up to 16-32 tokens ☆1,065 · updated Sep 4, 2024
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆482 · updated May 30, 2025
- Muon is Scalable for LLM Training (see the Muon sketch after this list) ☆1,469 · updated Aug 3, 2025
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · updated Jun 15, 2025
- A sparse attention kernel supporting mixed sparse patterns ☆503 · updated Jan 18, 2026
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,928 · updated this week
- ☆265 · updated Jul 11, 2024
- VideoSys: An easy and efficient system for video generation ☆2,021 · updated Aug 27, 2025
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/) ☆426 · updated Jul 5, 2025
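A few of the techniques named above are sketched below in plain PyTorch. Each is a minimal, single-process illustration: every function name is hypothetical, and none of the code reproduces the actual API or kernels of the listed repositories.

First, the ring-attention entry. The core idea is that K/V chunks circulate around a ring of devices while each device folds partial attention results into a running log-sum-exp, so the final softmax is exact despite never materializing the full score matrix. A minimal single-process simulation of that merge, assuming one chunk per "rank":

```python
import torch

def ring_attention_sim(q, k, v, num_ranks=4):
    """Exact attention computed one K/V chunk at a time, merging partial
    results with a running log-sum-exp (the math behind ring attention;
    real implementations overlap this loop with ring communication)."""
    d = q.shape[-1]
    out = torch.zeros_like(q)                        # running weighted sum
    lse = torch.full((q.shape[0],), float("-inf"))   # running log-sum-exp
    for kc, vc in zip(k.chunk(num_ranks), v.chunk(num_ranks)):
        scores = q @ kc.T / d ** 0.5                 # (Sq, Schunk)
        chunk_lse = torch.logsumexp(scores, dim=-1)
        chunk_out = torch.softmax(scores, dim=-1) @ vc
        new_lse = torch.logaddexp(lse, chunk_lse)    # merge normalizers
        out = (out * (lse - new_lse).exp().unsqueeze(-1)
               + chunk_out * (chunk_lse - new_lse).exp().unsqueeze(-1))
        lse = new_lse
    return out

q, k, v = torch.randn(8, 64), torch.randn(32, 64), torch.randn(32, 64)
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1) @ v
assert torch.allclose(ring_attention_sim(q, k, v), ref, atol=1e-5)
```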
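The prefix-sharing entry accelerates preference tuning by packing the shared prompt once with both the chosen and rejected responses, using an attention mask so each response attends to the prompt and itself but not to the other response; the prompt is then processed once per pair instead of twice. A minimal sketch with hypothetical lengths and a hand-built boolean mask:

```python
import torch
import torch.nn.functional as F

def prefix_sharing_mask(p, a, b):
    """(p+a+b, p+a+b) boolean mask, True = may attend: causal overall,
    with the rejected response blocked from seeing the chosen one."""
    n = p + a + b
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))
    mask[p + a:, p:p + a] = False   # rejected tokens cannot see chosen tokens
    return mask

p, a, b, d = 6, 4, 4, 32             # prompt / chosen / rejected lengths, head dim
x = torch.randn(1, 1, p + a + b, d)  # (batch, heads, seq, head_dim)
out = F.scaled_dot_product_attention(x, x, x, attn_mask=prefix_sharing_mask(p, a, b))
```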
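The block-sparse entries (flex-block-attn, the NSA kernels, XAttention, and relatives) share one structural idea: scores are computed only for (query-block, key-block) pairs enabled in a block mask. The dense reference below only mimics the math, leaving disabled blocks at -inf so they receive zero softmax weight; a real kernel skips masked blocks entirely. All names are illustrative:

```python
import torch

def block_sparse_attention(q, k, v, block_mask, block=16):
    """block_mask: (num_q_blocks, num_k_blocks) bool, True = compute block."""
    d = q.shape[-1]
    scores = torch.full((q.shape[0], k.shape[0]), float("-inf"))
    for i, j in torch.nonzero(block_mask):           # only the enabled blocks
        qs = slice(i * block, (i + 1) * block)
        ks = slice(j * block, (j + 1) * block)
        scores[qs, ks] = q[qs] @ k[ks].T / d ** 0.5
    return torch.softmax(scores, dim=-1) @ v

q, k, v = torch.randn(64, 32), torch.randn(64, 32), torch.randn(64, 32)
block_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))  # block-causal pattern
out = block_sparse_attention(q, k, v, block_mask)
```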
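The quantized-attention entries (SageAttention and relatives) compute the QK^T product in low precision with per-row scales, then dequantize before the softmax. The sketch below keeps only that arithmetic skeleton; it is not SageAttention's algorithm, which adds K-smoothing and fused GPU kernels:

```python
import torch

def quantize_rows(x):
    """Symmetric per-row INT8 quantization; returns int8 values + fp32 scales."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    return (x / scale).round().clamp(-127, 127).to(torch.int8), scale

def int8_attention(q, k, v):
    qi, qs = quantize_rows(q)
    ki, ks = quantize_rows(k)
    # Real kernels run INT8 tensor-core MMA with int32 accumulators; the
    # integer product is emulated in float32 here for portability.
    scores = (qi.float() @ ki.float().T) * qs * ks.T / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v   # P·V left in full precision

q, k, v = torch.randn(8, 64), torch.randn(32, 64), torch.randn(32, 64)
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1) @ v
print((int8_attention(q, k, v) - ref).abs().max())  # small quantization error
```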
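Finally, the Muon entries replace the elementwise optimizer step for 2-D weight matrices with an approximately orthogonalized momentum direction, computed by a few Newton-Schulz iterations. The quintic coefficients below follow the published Muon iteration; the rest is a reference-style sketch, not the repositories' distributed or fused implementations:

```python
import torch

def newton_schulz(G, steps=5, eps=1e-7):
    """Approximately orthogonalize G (push its singular values toward 1)
    with the quintic Newton-Schulz iteration used by Muon."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)                 # bound the spectral norm
    transposed = G.shape[0] > G.shape[1]
    if transposed:
        X = X.T                              # keep the Gram matrix small
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(weight, grad, momentum, lr=0.02, beta=0.95):
    momentum.mul_(beta).add_(grad)           # heavy-ball momentum buffer
    weight.add_(newton_schulz(momentum), alpha=-lr)

W = torch.randn(256, 128)
buf = torch.zeros_like(W)
muon_step(W, torch.randn_like(W), buf)       # one hypothetical update
```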