zju-jiyicheng / SpecVLM
[EMNLP 2025 Main] SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning
☆30 · Updated 3 weeks ago
Alternatives and similar repositories for SpecVLM
Users interested in SpecVLM are comparing it to the repositories listed below.
- Efficient Mixture of Experts for LLM Paper List ☆153 · Updated 3 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆122 · Updated last month
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆67 · Updated 3 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆103 · Updated last year
- qwen-nsa ☆86 · Updated 2 months ago
- [NAACL 2025🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference ☆15 · Updated 6 months ago
- ☆42 · Updated 9 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆190 · Updated last month
- Code release for VTW (AAAI 2025 Oral) ☆65 · Updated last month
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆139 · Updated last week
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆20 · Updated 4 months ago
- 青稞Talk (a Chinese AI/LLM talk series) ☆180 · Updated 3 weeks ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆159 · Updated 2 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆404 · Updated 9 months ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆32 · Updated 9 months ago
- ☆16 · Updated 10 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆181 · Updated 3 months ago
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆49 · Updated 4 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆628 · Updated 3 months ago
- This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… ☆17 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation ☆110 · Updated 2 weeks ago
- ☆297 · Updated 5 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆285 · Updated 2 months ago
- Fast, memory-efficient attention column reduction (e.g., sum, mean, max) ☆29 · Updated 2 weeks ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆262 · Updated 5 months ago
- ☆106 · Updated this week
- ☆33 · Updated 9 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆60 · Updated 2 months ago
- ☆64 · Updated last year
- [EMNLP 2025 Main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆97 · Updated 2 months ago