DFlash: Block Diffusion for Flash Speculative Decoding
☆560 · Updated Feb 18, 2026 (last week)
Alternatives and similar repositories for dflash
Users interested in dflash are comparing it to the libraries listed below.
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated Jul 4, 2025 (7 months ago)
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆138 · Updated Dec 5, 2025 (2 months ago)
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated Jan 26, 2026 (last month)
- Fast, memory-efficient attention column reduction (e.g., sum, mean, max) ☆37 · Updated Feb 10, 2026 (2 weeks ago)
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Updated Nov 19, 2025 (3 months ago)
- d3LLM: Ultra-Fast Diffusion LLM 🚀 ☆93 · Updated Feb 4, 2026 (3 weeks ago)
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆701 · Updated Feb 14, 2026 (2 weeks ago)
- A simple API to use CUPTI ☆11 · Updated Aug 19, 2025 (6 months ago)
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025 (6 months ago)
- Run GEPA on your favorite non-Python libraries. ☆33 · Updated Jan 22, 2026 (last month)
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆144 · Updated Dec 4, 2024 (last year)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,201 · Updated Feb 20, 2026 (last week)
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆237 · Updated Jun 15, 2025 (8 months ago)
- ☆52 · Updated May 19, 2025 (9 months ago)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025 (last year)
- More reliable Video Understanding Evaluation ☆14 · Updated Sep 23, 2025 (5 months ago)
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding ☆34 · Updated Jan 16, 2026 (last month)
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆54 · Updated Jan 12, 2026 (last month)
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated May 12, 2025 (9 months ago)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated Feb 9, 2026 (3 weeks ago)
- ☆65 · Updated Apr 26, 2025 (10 months ago)
- ☆451 · Updated Aug 10, 2025 (6 months ago)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆650 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- ☆66 · Updated Jul 8, 2025 (7 months ago)
- FlashInfer: Kernel Library for LLM Serving ☆5,009 · Updated Feb 23, 2026 (last week)
- Lite attention implemented over FlashAttention 3 ☆45 · Updated this week
- Official Implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS) ☆54 · Updated Mar 14, 2025 (11 months ago)
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… ☆3,182 · Updated Jan 17, 2026 (last month)
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆58 · Updated Oct 27, 2025 (4 months ago)
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆161 · Updated Oct 13, 2025 (4 months ago)
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,073 · Updated Apr 3, 2025 (10 months ago)
- Transformers components but in Triton ☆34 · Updated May 9, 2025 (9 months ago)
- Code for the paper "Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling" ☆128 · Updated this week
- ☆38 · Updated Aug 7, 2025 (6 months ago)
- slime is an LLM post-training framework for RL Scaling. ☆4,381 · Updated this week
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Updated Sep 12, 2025 (5 months ago)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, uses approximate and dynamic sparse calculation of the attention… ☆1,190 · Updated Sep 30, 2025 (5 months ago)
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆464 · Updated May 30, 2025 (9 months ago)
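
For readers comparing these projects: the speculative-decoding entries above (dflash, EAGLE, HASS, SpecForge, and others) all build on the same draft-then-verify loop. Below is a minimal, illustrative sketch of that loop with greedy verification; `target_model` and `draft_model` are hypothetical stand-ins, not the API of any repository listed here.

```python
import random

def speculative_decode(target_model, draft_model, prompt, k=4, max_new=32):
    """Greedy draft-then-verify loop (illustrative sketch only).

    target_model(tokens) -> next-token id under the large model
    draft_model(tokens)  -> next-token id under the small model
    Both callables are hypothetical stand-ins for real model APIs.
    Real implementations verify all k+1 positions in a single target
    forward pass; the sequential calls here are for clarity only.
    """
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1. Draft: the small model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_model(tokens + draft))
        # 2. Verify: with greedy decoding, a proposal is accepted iff it
        #    matches the target model's own choice at that position.
        accepted = 0
        for i, tok in enumerate(draft):
            if target_model(tokens + draft[:i]) == tok:
                accepted += 1
            else:
                break
        tokens += draft[:accepted]
        # 3. The verification pass yields one extra target token for free
        #    (the correction on mismatch, or a bonus on full acceptance).
        tokens.append(target_model(tokens))
    return tokens

# Toy demo: a "target" that cycles 0,1,2,... and a "draft" that usually agrees.
if __name__ == "__main__":
    target = lambda ts: len(ts) % 3
    draft = lambda ts: len(ts) % 3 if random.random() < 0.9 else 99
    print(speculative_decode(target, draft, prompt=[0], k=4, max_new=12))
```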