opendatalab / CHARM
[ACL 2024 Main Conference] Chinese commonsense reasoning benchmark for LLMs
☆28 · Updated 5 months ago
Alternatives and similar repositories for CHARM:
Users interested in CHARM are comparing it to the repositories listed below.
- [AAAI 2024] Visual Instruction Generation and Correction ☆91 · Updated 11 months ago
- [CVPR 2024] Official code for the paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆103 · Updated 6 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆60 · Updated 2 months ago
- Repository for the paper "Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining" ☆39 · Updated last month
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆44 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆40 · Updated 6 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated last month
- [ACL 2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆337 · Updated 9 months ago
- Official repository of the MMDU dataset ☆80 · Updated 3 months ago
- MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities ☆78 · Updated 3 months ago
- A simulated dataset of 9,536 charts with associated data annotations in CSV format ☆21 · Updated 10 months ago
- An RLHF infrastructure for vision-language models ☆145 · Updated 2 months ago
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆160 · Updated 3 months ago
- A collection of papers on multi-modal LLMs for math/STEM/code ☆54 · Updated last week
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆89 · Updated last week
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆251 · Updated 6 months ago
- [ACL 2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆44 · Updated 10 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆90 · Updated 2 months ago
- ✨✨ MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆88 · Updated last month
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆173 · Updated 4 months ago
- Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal …' ☆38 · Updated last month
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆128 · Updated 6 months ago
- [CVPR 2024] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆257 · Updated 4 months ago