OpenDFM / MULTI-Benchmark
MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images
☆38 · Updated last month
Alternatives and similar repositories for MULTI-Benchmark
Users interested in MULTI-Benchmark are comparing it to the libraries listed below.
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated 11 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆69 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆77 · Updated 4 months ago
- Evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆34 · Updated 10 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- ☆51 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- ☆73 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆104 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- [ICLR'25] Geometric Problem Solving Through Unified Formalized Vision-Language Pre-training ☆37 · Updated 4 months ago
- ☆29 · Updated 8 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 6 months ago
- ☆64 · Updated last year
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆114 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆64 · Updated 2 weeks ago
- Official repository of the MMDU dataset ☆91 · Updated 8 months ago
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆107 · Updated last month
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- ☆43 · Updated 5 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated 2 weeks ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 6 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆106 · Updated last month
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆86 · Updated 7 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 7 months ago
- ☆46 · Updated last month
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 7 months ago