A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention
☆292 · Dec 1, 2025 · Updated 4 months ago
Alternatives and similar repositories for Efficient_Attention_Survey
Users interested in Efficient_Attention_Survey are comparing it to the libraries listed below.
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆295 · Feb 24, 2026 · Updated last month
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆91 · Nov 29, 2025 · Updated 4 months ago
- ☆241 · Nov 19, 2025 · Updated 5 months ago
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆976 · Feb 25, 2026 · Updated last month
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆98 · Dec 2, 2025 · Updated 4 months ago
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆655 · Mar 6, 2026 · Updated last month
- A sparse attention kernel supporting mixed sparse patterns ☆497 · Jan 18, 2026 · Updated 3 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆984 · Feb 5, 2026 · Updated 2 months ago
- KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation ☆48 · Mar 30, 2026 · Updated 2 weeks ago
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆65 · Jul 1, 2025 · Updated 9 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆536 · Feb 10, 2025 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- qwen-nsa ☆87 · Oct 14, 2025 · Updated 6 months ago
- ☆51 · May 19, 2025 · Updated 11 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆156 · Mar 21, 2025 · Updated last year
- Code for the paper [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Oct 13, 2025 · Updated 6 months ago
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… ☆3,296 · Jan 17, 2026 · Updated 3 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆781 · Apr 8, 2026 · Updated last week
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆111 · Updated this week
- ☆140 · Aug 18, 2025 · Updated 8 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆64 · Mar 25, 2025 · Updated last year
- ☆67 · Apr 26, 2025 · Updated 11 months ago
- Tutorials on extending and importing TVM as a CMake include dependency ☆16 · Oct 11, 2024 · Updated last year
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆253 · Dec 16, 2024 · Updated last year
- Puzzles for learning Triton, play it with minimal environment configuration! ☆672 · Mar 17, 2026 · Updated last month
- Accelerating MoE with IO and Tile-aware Optimizations ☆630 · Apr 1, 2026 · Updated 2 weeks ago
- ☆36 · Dec 9, 2025 · Updated 4 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆68 · Dec 11, 2025 · Updated 4 months ago
- [ICLR 2026 Oral] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation ☆94 · Mar 12, 2026 · Updated last month
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆277 · Jul 6, 2025 · Updated 9 months ago
- [ACL2025 Oral🔥] Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling ☆27 · Nov 11, 2025 · Updated 5 months ago
- ☆11 · Jan 10, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) ☆689 · Updated this week
- Code Repository of Evaluating Quantized Large Language Models ☆134 · Sep 8, 2024 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Nov 19, 2025 · Updated 5 months ago
- 🚀 Efficient implementations for emerging model architectures ☆4,878 · Updated this week
- FP8 Flash Attention implemented with the CUTLASS library on the Ada architecture ☆82 · Aug 12, 2024 · Updated last year