DFlash: Block Diffusion for Flash Speculative Decoding
☆634 · Mar 15, 2026 · Updated last week
Alternatives and similar repositories for dflash
Users interested in dflash are comparing it to the libraries listed below.
- Fast, memory-efficient attention column reduction (e.g., sum, mean, max) ☆42 · Feb 10, 2026 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 8 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆156 · Feb 27, 2026 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆730 · Mar 14, 2026 · Updated last week
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆144 · Dec 4, 2024 · Updated last year
- [ICLR 2026 Oral] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation ☆92 · Mar 12, 2026 · Updated last week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆531 · Feb 10, 2025 · Updated last year
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) ☆2,229 · Feb 20, 2026 · Updated last month
- Code for the paper “Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling” ☆138 · Mar 7, 2026 · Updated 2 weeks ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Nov 19, 2025 · Updated 4 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆91 · Jan 26, 2026 · Updated last month
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆56 · Updated this week
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆377 · Jul 10, 2025 · Updated 8 months ago
- d3LLM: Ultra-Fast Diffusion LLM 🚀 ☆110 · Mar 15, 2026 · Updated last week
- ☆453 · Aug 10, 2025 · Updated 7 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,198 · Mar 9, 2026 · Updated last week
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆46 · Jul 17, 2025 · Updated 8 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆242 · Jun 15, 2025 · Updated 9 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆480 · Jan 18, 2026 · Updated 2 months ago
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- A Survey of Efficient Attention Methods: Hardware-Efficient, Sparse, Compact, and Linear Attention ☆287 · Dec 1, 2025 · Updated 3 months ago
- Official Implementation of DART (DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference) ☆45 · Feb 8, 2026 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Mar 15, 2026 · Updated last week
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- Run GEPA on your favorite non-Python libraries ☆33 · Jan 22, 2026 · Updated 2 months ago
- Efficient LLM Inference over Long Sequences ☆393 · Jun 25, 2025 · Updated 8 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,630 · Updated this week
- More reliable Video Understanding Evaluation ☆14 · Sep 23, 2025 · Updated 5 months ago
- Mixture-of-Basis-Experts for Compressing MoE-based LLMs ☆30 · Dec 24, 2025 · Updated 2 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆59 · Oct 27, 2025 · Updated 4 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated this week
- [ICLR 2025] Distilled Decoding 1: One-Step Sampling of Image Auto-regressive Models with Flow Matching ☆20 · Apr 21, 2025 · Updated 11 months ago
- [ICLR 2026] ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference ☆134 · Mar 14, 2026 · Updated last week
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆164 · Oct 13, 2025 · Updated 5 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆277 · Aug 31, 2024 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆273 · Jul 6, 2025 · Updated 8 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,719 · Jun 25, 2024 · Updated last year