Yxxxb / VoCo-LLaMA
[CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models".
☆200 · Updated 6 months ago
Alternatives and similar repositories for VoCo-LLaMA
Users interested in VoCo-LLaMA are comparing it to the libraries listed below.
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆157 · Updated 2 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆171 · Updated 2 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆133 · Updated 9 months ago
- 【NeurIPS 2024】Dense Connector for MLLMs ☆181 · Updated last year
- [NeurIPS2024] Repo for the paper `ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆202 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆129 · Updated 4 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆200 · Updated last year
- Pixel-Level Reasoning Model trained with RL [NeurIPS'25] ☆254 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆134 · Updated 6 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆148 · Updated last month
- ☆62 · Updated 7 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆126 · Updated 8 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆148 · Updated 3 months ago
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆229 · Updated last month
- ☆133 · Updated 8 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆90 · Updated last year
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆33 · Updated last year
- Official implementation of MIA-DPO ☆67 · Updated 10 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆361 · Updated 4 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆132 · Updated 8 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 11 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆58 · Updated 5 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆154 · Updated last year
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆97 · Updated last week
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ☆112 · Updated last year
- Official PyTorch Code of ReKV (ICLR'25) ☆78 · Updated last month
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆101 · Updated 2 months ago