opendatalab / CHARM
[ACL 2024 Main Conference] Chinese commonsense benchmark for LLMs
☆31 · Updated 8 months ago
Alternatives and similar repositories for CHARM:
Users interested in CHARM are comparing it to the libraries listed below.
- AAAI 2024: Visual Instruction Generation and Correction ☆92 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆47 · Updated 2 weeks ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆119 · Updated 9 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 4 months ago
- R1-Vision: Let's first take a look at the image ☆41 · Updated last month
- ☆64 · Updated 9 months ago
- An RLHF Infrastructure for Vision-Language Models ☆170 · Updated 4 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆73 · Updated 2 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆47 · Updated last year
- Official repository of the MMDU dataset ☆86 · Updated 6 months ago
- ☆25 · Updated 10 months ago
- The code repository for "Wings: Learning Multimodal LLMs without Text-only Forgetting" [NeurIPS 2024] ☆16 · Updated 3 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆49 · Updated 8 months ago
- PyTorch Implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg… ☆21 · Updated last month
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆92 · Updated this week
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆97 · Updated last month
- Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal … ☆46 · Updated last month
- The repo for the paper Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining. ☆39 · Updated 3 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆94 · Updated 3 weeks ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆33 · Updated 7 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models. ☆94 · Updated 3 weeks ago
- Official resource for the paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆9 · Updated 7 months ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆49 · Updated last week
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆81 · Updated 9 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆104 · Updated 5 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆148 · Updated 2 weeks ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆101 · Updated 3 weeks ago
- [ACL 2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆55 · Updated last year
- ☆14 · Updated last year