horseee / dKV-Cache
[NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models
☆110, updated 5 months ago
Alternatives and similar repositories for dKV-Cache
Users interested in dKV-Cache are comparing it with the repositories listed below.
- Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation (☆74, updated 3 months ago)
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching (☆116, updated last year)
- [NeurIPS'25] Official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Tok…" (☆54, updated last week)
- ☆61, updated 3 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache…) (☆171, updated last month)
- A Collection of Papers on Diffusion Language Models (☆134, updated last month)
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient (☆59, updated last month)
- [ICML 2025] Official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality…" (☆53, updated 7 months ago)
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference (☆177, updated last month)
- Dimple, the first Discrete Diffusion Multimodal Large Language Model (☆108, updated 3 months ago)
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More (☆58, updated 8 months ago)
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity (☆60, updated 3 months ago)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (☆242, updated 3 months ago)
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" (☆66, updated last year)
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models (☆289, updated last week)
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints (☆75, updated 3 months ago)
- [NeurIPS 2025] ScaleKV: Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression (☆49, updated 4 months ago)
- ☆70, updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel (☆121, updated 4 months ago)
- Data distillation benchmark (☆68, updated 4 months ago)
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders (☆24, updated 8 months ago)
- ☆35, updated 6 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling (☆49, updated last year)
- [NeurIPS 2024] Official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification (☆29, updated 7 months ago)
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM (☆97, updated 10 months ago)
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models (☆122, updated 4 months ago)
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View (☆131, updated 3 months ago)
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers (☆67, updated last year)
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… (☆26, updated 3 months ago)
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs (☆94, updated 11 months ago)