DFlash: Block Diffusion for Flash Speculative Decoding
☆2,450Apr 26, 2026Updated last week
Alternatives and similar repositories for dflash
Users interested in dflash are comparing it to the repositories listed below.
- Fast, memory-efficient attention column reduction (e.g., sum, mean, max)☆46Feb 10, 2026Updated 2 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter☆163Feb 27, 2026Updated 2 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code☆52Jul 4, 2025Updated 9 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving.☆801Apr 2, 2026Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads☆540Feb 10, 2025Updated last year
- A simple API for using CUPTI☆10Aug 19, 2025Updated 8 months ago
- Code for the papers: “Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling” and “Adaptive Block-Scaled Data Types”☆173Apr 21, 2026Updated last week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).☆2,313Feb 20, 2026Updated 2 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding☆146Dec 4, 2024Updated last year
- [ICLR 2026 Oral] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation☆100Mar 12, 2026Updated last month
- ☆52May 19, 2025Updated 11 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention calculation…☆1,210Apr 8, 2026Updated 3 weeks ago
- ☆66Apr 26, 2025Updated last year
- More reliable Video Understanding Evaluation☆15Sep 23, 2025Updated 7 months ago
- A sparse attention kernel supporting mixed sparse patterns☆503Jan 18, 2026Updated 3 months ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang☆44Nov 19, 2025Updated 5 months ago
- FlashInfer: Kernel Library for LLM Serving☆5,544Updated this week
- Flash-Muon: An Efficient Implementation of Muon Optimizer☆248Jun 15, 2025Updated 10 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference☆382Jul 10, 2025Updated 9 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit☆98Updated this week
- LLaDA2.0 is the diffusion language model series developed by the InclusionAI team at Ant Group.☆406Feb 12, 2026Updated 2 months ago
- 🚀 Efficient implementations for emerging model architectures☆4,999Updated this week
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders☆27Feb 21, 2025Updated last year
- SGLang is a high-performance serving framework for large language models and multimodal models.☆26,832Updated this week
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention☆294Dec 1, 2025Updated 5 months ago
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels☆5,928Updated this week
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l…☆57Mar 31, 2026Updated last month
- A reading list of popular MLSys topics☆24Mar 20, 2025Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.☆5,242Updated this week
- Supercharge Your LLM with the Fastest KV Cache Layer☆8,132Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads☆2,730Jun 25, 2024Updated last year
- A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.☆4,094Mar 13, 2026Updated last month
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring☆277Jul 6, 2025Updated 9 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models☆777Aug 14, 2025Updated 8 months ago
- ☆453Aug 10, 2025Updated 8 months ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models☆46Jul 17, 2025Updated 9 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs☆2,108Apr 3, 2025Updated last year
- d3LLM: Ultra-Fast Diffusion LLM 🚀☆120Apr 25, 2026Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv…☆294Apr 23, 2026Updated last week