Tencent-Hunyuan / flex-block-attn
flex-block-attn: an efficient block sparse attention computation library
☆65 · Updated this week
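For readers unfamiliar with the technique, here is a minimal pure-PyTorch sketch of block sparse attention: each query block attends only to the key/value blocks selected by a boolean block mask, skipping all other blocks entirely. The function name, mask layout, and shapes below are illustrative assumptions; they do not reflect flex-block-attn's actual API or its fused kernels.

```python
import torch

def block_sparse_attention(q, k, v, block_mask, block_size):
    """Illustrative block sparse attention (not flex-block-attn's API).

    q, k, v: (batch, heads, seq_len, head_dim); seq_len % block_size == 0.
    block_mask: (num_blocks, num_blocks) bool; True means the
    (query block, key block) pair is computed, everything else is skipped.
    """
    b, h, n, d = q.shape
    num_blocks = n // block_size
    scale = d ** -0.5
    out = torch.zeros_like(q)
    for i in range(num_blocks):
        rows = slice(i * block_size, (i + 1) * block_size)
        kept = block_mask[i].nonzero(as_tuple=True)[0]  # key blocks to keep
        if kept.numel() == 0:
            continue
        # Gather only the selected key/value blocks for this query block.
        cols = torch.cat([torch.arange(j * block_size, (j + 1) * block_size)
                          for j in kept.tolist()])
        scores = (q[:, :, rows] @ k[:, :, cols].transpose(-2, -1)) * scale
        out[:, :, rows] = torch.softmax(scores, dim=-1) @ v[:, :, cols]
    return out

# Example: 256 tokens, block size 64, block-diagonal (local) mask.
q = k = v = torch.randn(1, 8, 256, 64)
mask = torch.eye(4, dtype=torch.bool)
y = block_sparse_attention(q, k, v, mask, block_size=64)
```

Because whole blocks are skipped rather than masked after the fact, compute and memory scale with the number of kept block pairs instead of the full seq_len × seq_len score matrix, which is the source of the speedups claimed by the libraries listed below.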
Alternatives and similar repositories for flex-block-attn
Users interested in flex-block-attn are comparing it to the libraries listed below.
- Fast and memory-efficient exact kmeans ☆126 · Updated last week
- ☆46 · Updated 3 weeks ago
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 9 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 10 months ago
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP ☆90 · Updated 3 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆48 · Updated 3 weeks ago
- Code for Draft Attention ☆93 · Updated 6 months ago
- ☆187 · Updated 10 months ago
- Official implementation of the paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ☆55 · Updated 4 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆24 · Updated 9 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆60 · Updated 4 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆202 · Updated 2 months ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆140 · Updated last week
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated last month
- A parallel VAE that avoids OOM in high-resolution image generation ☆83 · Updated 3 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆51 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆32 · Updated 11 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆254 · Updated 4 months ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆93 · Updated 2 months ago
- ☆121 · Updated 3 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆45 · Updated 2 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆103 · Updated 5 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆73 · Updated last year
- ☆143 · Updated last week
- Efficient Triton implementation of Native Sparse Attention. ☆248 · Updated 6 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆133 · Updated 3 weeks ago
- To pioneer the training of long-context multi-modal transformer models ☆62 · Updated 3 months ago
- ☆254 · Updated 5 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 4 months ago