A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention
☆289 · Dec 1, 2025 · Updated 3 months ago
Alternatives and similar repositories for Efficient_Attention_Survey
Users interested in Efficient_Attention_Survey are comparing it to the libraries listed below.
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆292 · Feb 24, 2026 · Updated last month
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆90 · Nov 29, 2025 · Updated 4 months ago
- ☆240 · Nov 19, 2025 · Updated 4 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆969 · Feb 25, 2026 · Updated last month
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆96 · Dec 2, 2025 · Updated 3 months ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆648 · Mar 6, 2026 · Updated 3 weeks ago
- A sparse attention kernel supporting mixed sparse patterns ☆485 · Jan 18, 2026 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆978 · Feb 5, 2026 · Updated last month
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆63 · Jul 1, 2025 · Updated 8 months ago
- KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation ☆46 · Mar 6, 2026 · Updated 3 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆532 · Feb 10, 2025 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- qwen-nsa ☆87 · Oct 14, 2025 · Updated 5 months ago
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆96 · Feb 2, 2026 · Updated last month
- ☆52 · May 19, 2025 · Updated 10 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆154 · Mar 21, 2025 · Updated last year
- DeeperGEMM: heavily optimized version ☆75 · May 5, 2025 · Updated 10 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆167 · Oct 13, 2025 · Updated 5 months ago
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves a 2–5x speedup over FlashAttention, without losing end-t… ☆3,249 · Jan 17, 2026 · Updated 2 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆723 · Updated this week
- ☆140 · Aug 18, 2025 · Updated 7 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆62 · Mar 25, 2025 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- Tutorials on extending and importing TVM as a CMake include dependency ☆16 · Oct 11, 2024 · Updated last year
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆251 · Dec 16, 2024 · Updated last year
- Puzzles for learning Triton; play them with minimal environment configuration! ☆654 · Mar 17, 2026 · Updated last week
- Accelerating MoE with IO- and Tile-aware Optimizations ☆613 · Mar 17, 2026 · Updated last week
- ☆36 · Dec 9, 2025 · Updated 3 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Dec 11, 2025 · Updated 3 months ago
- [ICLR 2026 Oral] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation ☆94 · Mar 12, 2026 · Updated 2 weeks ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆274 · Jul 6, 2025 · Updated 8 months ago
- [ACL 2025 Oral 🔥] Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling ☆27 · Nov 11, 2025 · Updated 4 months ago
- ☆11 · Jan 10, 2025 · Updated last year
- 📚 A curated list of Awesome Efficient dLLMs Papers with Codes ☆137 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- Code repository for "Evaluating Quantized Large Language Models" ☆135 · Sep 8, 2024 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Nov 19, 2025 · Updated 4 months ago