Yxxxb / VoCo-LLaMA
[CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models".
☆155 · Updated 2 months ago
Alternatives and similar repositories for VoCo-LLaMA:
Users interested in VoCo-LLaMA are comparing it to the repositories listed below.
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆92 · Updated 2 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆127 · Updated 11 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆164 · Updated 3 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆160 · Updated 6 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆154 · Updated 5 months ago
- Official implementation of MIA-DPO ☆56 · Updated 3 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆175 · Updated 7 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆111 · Updated last month
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆57 · Updated last week
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆75 · Updated 3 weeks ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆125 · Updated last year
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆71 · Updated 4 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆133 · Updated 5 months ago
- Official repository of the MMDU dataset ☆89 · Updated 7 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆69 · Updated 7 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆188 · Updated last month
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆114 · Updated 2 months ago
- [NeurIPS 2024 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆95 · Updated 9 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆149 · Updated this week
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆100 · Updated 2 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆154 · Updated 7 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆61 · Updated last week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 3 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆106 · Updated last month
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆104 · Updated this week