opendatalab / CHARM
[ACL 2024 Main Conference] Chinese commonsense benchmark for LLMs
☆36 · Updated 10 months ago
Alternatives and similar repositories for CHARM
Users interested in CHARM are comparing it to the repositories listed below.
- AAAI 2024: Visual Instruction Generation and Correction ☆93 · Updated last year
- ☆74 · Updated last year
- Official repository of the MMDU dataset ☆91 · Updated 8 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆63 · Updated 2 months ago
- An RLHF Infrastructure for Vision-Language Models ☆176 · Updated 6 months ago
- Paper collection on multimodal LLMs for Math/STEM/Code. ☆98 · Updated last week
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆126 · Updated 11 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆122 · Updated 3 months ago
- Repo for the paper "Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining". ☆42 · Updated 6 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models. ☆127 · Updated last month
- MMICL, a state-of-the-art VLM with multimodal in-context learning ability, from PKU ☆47 · Updated last year
- R1-Vision: Let's first take a look at the image ☆47 · Updated 3 months ago
- ☆81 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 6 months ago
- ☆14 · Updated last year
- A journey to a real multimodal R1! We are running large-scale experiments. ☆306 · Updated 3 weeks ago
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆41 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆159 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆26 · Updated 7 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- ☆45 · Updated last month
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆108 · Updated last month
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆115 · Updated last month
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆61 · Updated 2 months ago
- ☆77 · Updated 4 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆107 · Updated 3 weeks ago
- Official resource for the paper "Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models" (ACL 20… ☆12 · Updated 9 months ago