zju-jiyicheng / SpecVLM
[EMNLP 2025 Main] SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning
☆33 · Jan 11, 2026 · Updated last month
Alternatives and similar repositories for SpecVLM
Users interested in SpecVLM are comparing it to the repositories listed below.
- Fast, memory-efficient attention column reduction (e.g., sum, mean, max) ☆36 · Feb 10, 2026 · Updated last week
- ☆16 · Mar 24, 2025 · Updated 10 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ☆82 · Jan 17, 2026 · Updated last month
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆67 · May 15, 2025 · Updated 9 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆129 · Nov 26, 2025 · Updated 2 months ago
- ☆42 · Mar 15, 2025 · Updated 11 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆38 · Jan 27, 2026 · Updated 3 weeks ago
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆113 · Dec 12, 2025 · Updated 2 months ago
- ☆13 · Nov 15, 2017 · Updated 8 years ago
- [NeurIPS 2025] Official code for the paper "Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs" ☆86 · Sep 20, 2025 · Updated 4 months ago
- ☆13 · Jul 3, 2024 · Updated last year
- ☆15 · Jan 27, 2026 · Updated 3 weeks ago
- ☆29 · Nov 18, 2025 · Updated 2 months ago
- ☆13 · Jan 7, 2025 · Updated last year
- ☆20 · Nov 21, 2025 · Updated 2 months ago
- ☆15 · Sep 11, 2025 · Updated 5 months ago
- ☆13 · May 15, 2025 · Updated 9 months ago
- https://avocado-captioner.github.io/ ☆29 · Oct 16, 2025 · Updated 4 months ago
- The official implementation of "Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings" ☆18 · Dec 5, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆18 · Jan 23, 2026 · Updated 3 weeks ago
- ☆12 · Apr 9, 2025 · Updated 10 months ago
- LLaVA-Next for STVG ☆18 · Dec 5, 2025 · Updated 2 months ago
- ☆25 · Oct 11, 2025 · Updated 4 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆197 · Nov 17, 2025 · Updated 3 months ago
- An MLLM inference engine for academic research ☆19 · Jan 30, 2026 · Updated 2 weeks ago
- The official implementation of "Test-time Adaptation for Regression by Subspace Alignment" (ICLR 2025) ☆14 · Jun 6, 2025 · Updated 8 months ago
- (NeurIPS 2025 🔥) Official implementation of "Efficient Multi-modal Large Language Models via Progressive Consistency Distillation" ☆41 · Updated this week
- DEDISbench: a disk I/O block-based benchmark for deduplication systems. Unlike other existing benchmarks, written content is generated i… ☆14 · Jul 22, 2021 · Updated 4 years ago
- ☆16 · Jul 12, 2024 · Updated last year
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" ☆160 · Oct 13, 2025 · Updated 4 months ago
- Official PyTorch code for the ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" ☆24 · Mar 4, 2025 · Updated 11 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆65 · Nov 1, 2024 · Updated last year
- Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning ☆29 · Sep 12, 2025 · Updated 5 months ago
- Yan (炎) is a high-performance CUDA operator library designed for learning purposes while emphasizing clean code and maximum performance ☆18 · Jul 21, 2025 · Updated 6 months ago
- ROSA-Tuning ☆66 · Feb 4, 2026 · Updated last week
- A fully homomorphic encryption (FHE) framework with end-to-end GPU acceleration ☆20 · Apr 18, 2025 · Updated 9 months ago
- ☆17 · Feb 18, 2025 · Updated 11 months ago
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) ☆23 · Sep 15, 2025 · Updated 5 months ago
- [ICLR'25] Streaming Video Question-Answering with In-context Video KV-Cache Retrieval ☆101 · Nov 4, 2025 · Updated 3 months ago