Yxxxb / VoCo-LLaMA
[CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models".
☆191 · Updated 4 months ago
Alternatives and similar repositories for VoCo-LLaMA
Users interested in VoCo-LLaMA are comparing it to the libraries listed below.
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆150 · Updated 3 weeks ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated 2 weeks ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆179 · Updated last year
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆238 · Updated last month
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆132 · Updated 7 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆194 · Updated 3 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆353 · Updated 3 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆146 · Updated 11 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆129 · Updated 4 months ago
- ☆58 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆124 · Updated 2 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 4 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- ☆125 · Updated 7 months ago
- Official implementation of MIA-DPO ☆66 · Updated 9 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆133 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆152 · Updated last week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆118 · Updated last month
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆224 · Updated 3 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆134 · Updated this week
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆75 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆183 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆69 · Updated last month
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆157 · Updated 10 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 · Updated last year
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆85 · Updated last month