AIoT-MLSys-Lab / MEDA
[NAACL 2025 🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference
☆17 · Updated 7 months ago
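As context for the listing below: the repository's title describes dynamic KV cache allocation, i.e., deciding at inference time how much cache capacity each layer keeps rather than using a uniform budget. The snippet that follows is only a minimal, generic sketch of that family of techniques (per-layer budgets proportional to attention mass, then top-k retention); it is not MEDA's published algorithm, and the function name, tensor shapes, and precomputed `attn_weights` signal are assumptions made for illustration.

```python
# Minimal, generic sketch of dynamic KV cache allocation (NOT MEDA's algorithm).
# Idea: give each layer a KV budget proportional to how much attention mass its
# cached tokens carry, then keep only the most-attended entries in each layer.
import torch

def allocate_and_prune_kv(keys, values, attn_weights, total_budget):
    """keys/values: per-layer lists of [seq_len, head_dim] tensors (assumed shapes).
    attn_weights: per-layer [seq_len] tensors of mean attention each cached token
    received (an assumed, precomputed signal). total_budget: total entries kept."""
    layer_scores = torch.tensor([float(w.sum()) for w in attn_weights])
    # Dynamic allocation: split the global budget proportionally to layer scores
    # (flooring means the kept total can be slightly under total_budget).
    budgets = (layer_scores / layer_scores.sum() * total_budget).long().clamp(min=1)
    pruned = []
    for k, v, w, b in zip(keys, values, attn_weights, budgets):
        b = min(int(b), k.shape[0])
        # Keep the b most-attended tokens, restoring their original order.
        keep = torch.topk(w, b).indices.sort().values
        pruned.append((k[keep], v[keep]))
    return pruned

# Toy usage: 2 layers, 8 cached tokens each, keep ~6 entries overall.
keys = [torch.randn(8, 64) for _ in range(2)]
values = [torch.randn(8, 64) for _ in range(2)]
attn = [torch.rand(8) for _ in range(2)]
compact = allocate_and_prune_kv(keys, values, attn, total_budget=6)
```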
Alternatives and similar repositories for MEDA
Users interested in MEDA are comparing it to the libraries listed below
- [ICLR 2025] Official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆70 · Updated 4 months ago
- Code release for VTW (AAAI 2025 Oral) ☆64 · Updated 3 months ago
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆104 · Updated last year
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆105 · Updated 7 months ago
- ☆33 · Updated 8 months ago
- ☆64 · Updated 2 weeks ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆37 · Updated last year
- [EMNLP 2025 Main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆103 · Updated 3 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs ☆84 · Updated 3 months ago
- [NeurIPS 2025] Official code for the paper "Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs" ☆86 · Updated 4 months ago
- [ICML'25] Official implementation of the papers "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp…" ☆237 · Updated last month
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated 11 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆38 · Updated 2 weeks ago
- ☆113 · Updated 4 months ago
- Code for the paper "Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference" ☆12 · Updated 8 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆65 · Updated 2 months ago
- ✨✨ [AAAI 2026] Official implementation of "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Vi…" ☆77 · Updated 9 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆99 · Updated 2 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆60 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆91 · Updated 11 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆96 · Updated 4 months ago
- [ICML 2025] Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…" ☆171 · Updated 4 months ago
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆51 · Updated last week
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆44 · Updated 9 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆153 · Updated 7 months ago
- [NeurIPS 2025] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains ☆76 · Updated 6 months ago
- Cross-Self KV Cache Pruning for Efficient Vision-Language Inference ☆10 · Updated last year
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆298 · Updated last week
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆54 · Updated 4 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆163 · Updated 4 months ago