attention-survey / Efficient_Attention_Survey
A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention
☆281 · Dec 1, 2025 · Updated 2 months ago
Alternatives and similar repositories for Efficient_Attention_Survey
Users interested in Efficient_Attention_Survey are comparing it to the libraries listed below.
- ☆223 · Nov 19, 2025 · Updated 2 months ago
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆87 · Nov 29, 2025 · Updated 2 months ago
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆939 · Dec 31, 2025 · Updated last month
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆88 · Dec 2, 2025 · Updated 2 months ago
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆57 · Feb 2, 2026 · Updated 2 weeks ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆264 · Jan 17, 2026 · Updated last month
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆61 · Mar 25, 2025 · Updated 10 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆455 · Jan 18, 2026 · Updated 3 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆965 · Feb 5, 2026 · Updated last week
- qwen-nsa ☆87 · Oct 14, 2025 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆149 · Mar 21, 2025 · Updated 10 months ago
- ☆11 · Jan 10, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 8 months ago
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆627 · Feb 3, 2026 · Updated 2 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆160 · Oct 13, 2025 · Updated 4 months ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… ☆3,159 · Jan 17, 2026 · Updated last month
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆639 · Updated this week
- ☆130 · Aug 18, 2025 · Updated 5 months ago
- Code Repository of Evaluating Quantized Large Language Models ☆135 · Sep 8, 2024 · Updated last year
- Tutorials of Extending and importing TVM with CMAKE Include dependency. ☆16 · Oct 11, 2024 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Implement Flash Attention using Cute. ☆100 · Dec 17, 2024 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆43 · Nov 19, 2025 · Updated 2 months ago
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆249 · Dec 16, 2024 · Updated last year
- Puzzles for learning Triton, play it with minimal environment configuration! ☆630 · Dec 28, 2025 · Updated last month
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆812 · Mar 6, 2025 · Updated 11 months ago
- ☆41 · Oct 15, 2025 · Updated 4 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆164 · Updated this week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Jul 6, 2025 · Updated 7 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- Official implementation of paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆62 · Jul 1, 2025 · Updated 7 months ago
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆122 · Jan 1, 2026 · Updated last month
- FP8 flash attention on the Ada architecture, implemented with the cutlass repository ☆79 · Aug 12, 2024 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Dec 11, 2025 · Updated 2 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Jun 24, 2025 · Updated 7 months ago
- ☆15 · Jan 21, 2026 · Updated 3 weeks ago