arctanxarc / MC-LLaVA
Official implementation of MC-LLaVA.
☆31 · Updated last month
Alternatives and similar repositories for MC-LLaVA
Users interested in MC-LLaVA are comparing it to the repositories listed below.
- Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆24 · Updated last month
- ☆88 · Updated 3 months ago
- Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆30 · Updated last month
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆83 · Updated last month
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆22 · Updated 2 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 3 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆37 · Updated 5 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆39 · Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆43 · Updated last month
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆50 · Updated last month
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆64 · Updated last month
- ☆15 · Updated 2 months ago
- Official implementation of MIA-DPO ☆59 · Updated 5 months ago
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆16 · Updated 2 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆30 · Updated 6 months ago
- [CVPR 2025] BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding ☆23 · Updated 3 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆29 · Updated 2 weeks ago
- [arXiv 2504.09130] VisuoThink: Empowering LVLM Reasoning with Multimodal Tree Search ☆20 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated 9 months ago
- ☆53 · Updated 2 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆31 · Updated 2 weeks ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆101 · Updated last month
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆29 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆80 · Updated 2 months ago
- [CVPR 2025] DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval ☆17 · Updated 3 weeks ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆82 · Updated 2 weeks ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆184 · Updated this week
- Official repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆38 · Updated 3 weeks ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 4 months ago
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆58 · Updated this week