SUSTechBruce / LOOK-M
[EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference"
☆100 Updated 10 months ago
Alternatives and similar repositories for LOOK-M
Users interested in LOOK-M are comparing it to the libraries listed below.
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆51 Updated 9 months ago
- Code release for VTW (AAAI 2025 Oral) ☆50 Updated last month
- ☆55 Updated 4 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆144 Updated 2 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs. ☆70 Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆85 Updated 7 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆87 Updated last month
- A Self-Training Framework for Vision-Language Reasoning ☆83 Updated 7 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆87 Updated 9 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆128 Updated last month
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆122 Updated 6 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆35 Updated last year
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆73 Updated last week
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆90 Updated 2 months ago
- [EMNLP 2025 main] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆72 Updated 2 weeks ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆57 Updated 9 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆40 Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆106 Updated 3 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆143 Updated 3 months ago
- [arXiv 2505] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains ☆50 Updated last month
- This is the official implementation of our paper "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens…" ☆73 Updated 4 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆114 Updated 2 months ago
- ☆104 Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆163 Updated 5 months ago
- ☆88 Updated 8 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 Updated 9 months ago
- Survey: https://arxiv.org/pdf/2507.20198 ☆133 Updated this week
- An RLHF Infrastructure for Vision-Language Models ☆183 Updated 10 months ago