maomaocun / dLLM-cache
Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching".
☆72 · Updated this week
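dLLM-Cache speeds up the iterative denoising loop of a diffusion LLM by reusing cached intermediate features instead of recomputing every layer at every step. Below is a minimal sketch of that reuse-versus-recompute decision, assuming a per-layer cache gated on input similarity; the `FeatureCache` class, the cosine-similarity test, and the 0.95 threshold are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of the adaptive-caching idea: during iterative diffusion
# decoding, a layer's cached output is reused when its current input is nearly
# identical to the input seen at the last full computation.
import torch


class FeatureCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold  # cosine-similarity cutoff for reuse
        self.cached_input = None    # input seen at the last full compute
        self.cached_output = None   # output produced at the last full compute

    def forward(self, layer, x: torch.Tensor) -> torch.Tensor:
        if self.cached_input is not None:
            # Cheap proxy: mean cosine similarity between current and cached inputs.
            sim = torch.nn.functional.cosine_similarity(
                x.flatten(1), self.cached_input.flatten(1), dim=-1
            ).mean()
            if sim > self.threshold:
                return self.cached_output  # reuse: skip recomputing the layer
        out = layer(x)                     # recompute and refresh the cache
        self.cached_input, self.cached_output = x.detach(), out.detach()
        return out


# Usage: wrap one layer and call it across denoising steps with slowly drifting inputs.
layer = torch.nn.Linear(64, 64)
cache = FeatureCache(threshold=0.95)
x = torch.randn(2, 16, 64)
for step in range(8):
    x_step = x + 0.001 * step * torch.randn_like(x)
    y = cache.forward(layer, x_step)
```

The paper additionally distinguishes prompt features (cached over long intervals) from response features (updated adaptively, guided by feature similarity); the sketch deliberately captures only the basic skip-or-recompute mechanism.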
Alternatives and similar repositories for dLLM-cache
Users interested in dLLM-cache are comparing it to the libraries listed below.
- ☆74 · Updated 2 weeks ago
- ☆78 · Updated last week
- XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆158 · Updated 3 weeks ago
- ☆83 · Updated last month
- ☆47 · Updated 2 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" · ☆97 · Updated 6 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆69 · Updated 3 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM · ☆74 · Updated 5 months ago
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models · ☆51 · Updated this week
- ✈️ Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints · ☆67 · Updated 2 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More · ☆45 · Updated 3 months ago
- ☆93 · Updated 2 weeks ago
- Official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" · ☆47 · Updated 2 months ago
- qwen-nsa · ☆66 · Updated last month
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" · ☆129 · Updated last week
- ☆77 · Updated this week
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models · ☆97 · Updated 3 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" · ☆84 · Updated 11 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization · ☆70 · Updated this week
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models · ☆89 · Updated last year
- ☆32 · Updated 2 weeks ago
- Open-Pandora: On-the-fly Control Video Generation · ☆34 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification" · ☆21 · Updated 2 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding · ☆47 · Updated 5 months ago
- Paper list on efficient Mixture-of-Experts for LLMs · ☆68 · Updated 5 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching · ☆103 · Updated 10 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers · ☆50 · Updated 9 months ago
- LongSpec: Long-Context Speculative Decoding with Efficient Drafting and Verification · ☆53 · Updated 3 months ago
- ☆74 · Updated 3 months ago
- 📚 Collection of token-level model compression resources · ☆98 · Updated this week