OpenGVLab / MMIU
[ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
☆61 · Updated 5 months ago
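For context, below is a minimal sketch of how one might score a vision-language model on MMIU-style multi-image multiple-choice questions. The JSON field names (`question`, `options`, `images`, `answer`) and the `predict_fn` callback are illustrative assumptions, not the repository's actual schema or API; see the MMIU repo for the official data format and evaluation code.

```python
import json
from pathlib import Path


def load_mmiu(json_path: str):
    """Load MMIU-style records from a local JSON file (field names here are assumed, not official)."""
    with open(json_path, "r", encoding="utf-8") as f:
        return json.load(f)


def evaluate(records, predict_fn):
    """Score multiple-choice predictions by exact match on the predicted answer letter."""
    correct = 0
    for rec in records:
        image_paths = [Path(p) for p in rec["images"]]  # each question comes with multiple images
        pred = predict_fn(image_paths, rec["question"], rec["options"])
        correct += int(pred.strip().upper() == rec["answer"].strip().upper())
    return correct / max(len(records), 1)


# Example usage (predict_fn is any callable wrapping your model):
# accuracy = evaluate(load_mmiu("mmiu.json"), my_model_predict)
```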
Alternatives and similar repositories for MMIU:
Users interested in MMIU are comparing it to the repositories listed below.
- Official implementation of MIA-DPO ☆49 · Updated 3 weeks ago
- [NeurIPS 2023] Implementation of Foundation Model is Efficient Multimodal Multitask Model Selector ☆36 · Updated 11 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 4 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆44 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆37 · Updated 4 months ago
- ☆61 · Updated last month
- Empowering Unified MLLM with Multi-granular Visual Generation ☆117 · Updated last month
- Official repository of MMDU dataset ☆83 · Updated 4 months ago
- ☆50 · Updated 6 months ago
- Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆95 · Updated 7 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆58 · Updated last year
- Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆62 · Updated 6 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 3 weeks ago
- Official repo for StableLLAVA ☆94 · Updated last year
- VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆51 · Updated last month
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction". ☆53 · Updated last month
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆127 · Updated 3 months ago
- ☆44 · Updated 9 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆63 · Updated 8 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆44 · Updated 3 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 8 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 3 weeks ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆26 · Updated 7 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆29 · Updated 2 months ago
- ☆26 · Updated 6 months ago
- Training code for CLIP-FlanT5 ☆24 · Updated 6 months ago
- Preference Learning for LLaVA ☆37 · Updated 3 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆67 · Updated 4 months ago