Beckschen / LLaVolta
[NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression
☆64 · Updated 10 months ago
Alternatives and similar repositories for LLaVolta
Users interested in LLaVolta are comparing it to the repositories listed below.
- ☆96 · Updated 6 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆137 · Updated 7 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆77 · Updated last year
- Official implementation of MIA-DPO ☆70 · Updated 11 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 10 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆133 · Updated 9 months ago
- ☆139 · Updated last year
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆203 · Updated 6 months ago
- ☆46 · Updated last year
- ☆80 · Updated 6 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆92 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated last year
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆85 · Updated 6 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆131 · Updated 5 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆42 · Updated 3 weeks ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆157 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- ☆100 · Updated last year
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆155 · Updated 3 months ago
- [NeurIPS 2024] Official code for (IMA) Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs ☆23 · Updated last year
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models ☆136 · Updated 2 years ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆127 · Updated 9 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated last year
- ☆124 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆169 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated 2 years ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆33 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆59 · Updated last year