OpenDFM / MULTI-Benchmark
MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images
☆32 · Updated last week
Alternatives and similar repositories for MULTI-Benchmark:
Users interested in MULTI-Benchmark are comparing it to the libraries listed below:
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆33 · Updated 3 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆43 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 3 weeks ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆45 · Updated 7 months ago
- ☆35 · Updated last month
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆54 · Updated 6 months ago
- MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆32 · Updated 2 months ago
- ☆47 · Updated last year
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆29 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆64 · Updated last month
- ☆63 · Updated last month
- Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal …' ☆39 · Updated 2 weeks ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago
- [AAAI 2025] Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning ☆27 · Updated 4 months ago
- ☆28 · Updated 4 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆32 · Updated 2 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆26 · Updated 7 months ago
- [AAAI 2025] Solving catastrophic forgetting in LMMs ☆38 · Updated 3 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆31 · Updated last week
- The official repository for the paper "Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark" ☆42 · Updated 3 weeks ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆37 · Updated 4 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆63 · Updated 8 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆90 · Updated last month
- ☆26 · Updated 6 months ago
- ☆93 · Updated 7 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆49 · Updated 4 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆70 · Updated 2 weeks ago