apple / ml-mia-bench
This repo contains code and data for the ICLR 2025 paper "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs".
☆31 · Updated 6 months ago
Alternatives and similar repositories for ml-mia-bench
Users interested in ml-mia-bench are comparing it to the repositories listed below.
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆67 · Updated 5 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 7 months ago
- ☆50 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆59 · Updated 11 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Multimodal RewardBench ☆46 · Updated 7 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- [ACL2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 6 months ago
- Preference Learning for LLaVA ☆49 · Updated 10 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR2025] ☆77 · Updated 2 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP25] ☆28 · Updated 3 weeks ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated 11 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆57 · Updated 9 months ago
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆48 · Updated 4 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆37 · Updated 5 months ago
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆41 · Updated last month
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated 2 years ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 10 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆74 · Updated last year
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆48 · Updated 3 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆108 · Updated 3 months ago
- ☆55 · Updated 3 months ago
- Unsupervised GRPO ☆47 · Updated 3 months ago
- FocusLLM: Scaling LLM’s Context by Parallel Decoding ☆43 · Updated 9 months ago
- ☆16 · Updated 11 months ago
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆36 · Updated 8 months ago
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆46 · Updated last year