z-lab / dflash
Block Diffusion for Ultra-Fast Speculative Decoding
☆459 · Updated this week
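dflash's tagline describes speculative decoding with a block-diffusion drafter: instead of proposing draft tokens one at a time, the drafter denoises a whole block in parallel, and the target model then verifies the block in a single forward pass. Below is a minimal greedy-acceptance sketch of that loop; it is illustrative only, and `draft_block`, `target_argmax`, and the simplified accept rule are assumptions rather than dflash's actual API.

```python
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_block: Callable[[List[int], int], List[int]],  # hypothetical: drafts k tokens in one denoising pass
    target_argmax: Callable[[List[int]], List[int]],     # hypothetical: target's greedy next token at every position
    block_size: int = 8,
    max_new_tokens: int = 64,
) -> List[int]:
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        # 1) The block-diffusion drafter proposes a whole block at once,
        #    rather than autoregressively token by token.
        draft = draft_block(tokens, block_size)
        # 2) One target forward pass over prompt + draft yields the target's
        #    greedy choice at each drafted position (plus one bonus position).
        preds = target_argmax(tokens + draft)[-(len(draft) + 1):]
        # 3) Accept the longest prefix of the draft that the target agrees
        #    with, then append the target's own token at the first mismatch
        #    (or a free bonus token if the whole block was accepted).
        n_accept = 0
        while n_accept < len(draft) and draft[n_accept] == preds[n_accept]:
            n_accept += 1
        tokens += draft[:n_accept] + [preds[n_accept]]
        produced += n_accept + 1
    return tokens[: len(prompt) + max_new_tokens]
```

Real speculative-decoding implementations replace the greedy match with a rejection-sampling accept rule so the output distribution provably matches the target model's; the sketch above only captures the block-draft-then-verify control flow.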
Alternatives and similar repositories for dflash
Users interested in dflash are comparing it to the libraries listed below.
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆410 · Updated last month
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 2 weeks ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆481 · Updated 2 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆160 · Updated 3 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆262 · Updated 8 months ago
- KV cache compression for high-throughput LLM inference ☆153 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- An early research-stage expert-parallel load balancer for MoE models based on linear programming. ☆495 · Updated 2 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated 11 months ago
- Code for data-aware compression of DeepSeek models ☆69 · Updated last month
- ☆449 · Updated 5 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆902 · Updated this week
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆116 · Updated 2 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆55 · Updated 3 months ago
- ☆131 · Updated 8 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Updated 7 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆233 · Updated 7 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆141 · Updated last year
- ☆221 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆825 · Updated last week
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆830 · Updated this week
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆63 · Updated 3 months ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ☆468 · Updated 8 months ago
- 🔥 A minimal training framework for scaling FLA models ☆343 · Updated 2 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆196 · Updated 3 weeks ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆192 · Updated 4 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆204 · Updated 2 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆289 · Updated 3 months ago