[ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification".
☆71 · Updated Sep 18, 2025
Alternatives and similar repositories for dynamic_llava
Users interested in dynamic_llava are comparing it to the libraries listed below.
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Updated Jul 16, 2025
- Implementation of the N:M sparsity recipe presented in the paper "Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers" ☆11 · Updated Feb 5, 2024
- ☆24 · Updated May 13, 2025
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆55 · Updated Oct 9, 2025
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆106 · Updated Jun 29, 2025
- ☆27 · Updated Nov 25, 2025
- Code release for VTW (AAAI 2025 Oral) ☆64 · Updated Nov 4, 2025
- The official implementation of Ada-KV [NeurIPS 2025] ☆128 · Updated Nov 26, 2025
- ☆66 · Updated Jan 23, 2026
- (ACL 2025 Oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · Updated May 28, 2025
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆22 · Updated Oct 10, 2024
- ☆22 · Updated Mar 7, 2025
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆104 · Updated Nov 9, 2024
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated Mar 6, 2025
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆29 · Updated Nov 22, 2025
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP 2024) ☆27 · Updated Oct 3, 2025
- ☆54 · Updated May 19, 2025
- Continuous Pipelined Speculative Decoding ☆16 · Updated Jan 4, 2026
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua…" ☆556 · Updated Jan 4, 2025
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆27 · Updated Feb 11, 2025
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆65 · Updated Dec 1, 2025
- A paper list about token merging, reduction, resampling, and dropping for MLLMs ☆86 · Updated Oct 26, 2025
- ☆35 · Updated Mar 17, 2025
- [NeurIPS 2025] The official repository for "RAD: Towards Trustworthy Retrieval-Augmented Multi-modal Clinical Diagnosis" ☆26 · Updated Nov 21, 2025
- ☆14 · Updated Jan 24, 2025
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆17 · Updated Nov 4, 2025
- ☆60 · Updated Jan 12, 2026
- ☆28 · Updated May 24, 2025
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆55 · Updated Feb 1, 2026
- Code for the paper "Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference" ☆13 · Updated Jun 7, 2025
- Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding ☆13 · Updated Nov 19, 2024
- An official repository for GPTailor ☆17 · Updated Jun 29, 2025
- The official implementation of "Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings" ☆18 · Updated Dec 5, 2024
- The open-source code for TokenCarve ☆23 · Updated Jan 23, 2026
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆51 · Updated Oct 18, 2024
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Updated Feb 14, 2025
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆164 · Updated Sep 27, 2025
- KV cache compression via sparse coding ☆17 · Updated Oct 26, 2025
- ☆15 · Updated Apr 11, 2024