princeton-nlp / CharXiv
[NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
☆115 · Updated last month
Alternatives and similar repositories for CharXiv
Users interested in CharXiv are comparing it to the repositories listed below.
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 6 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆84 · Updated 11 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆107 · Updated 3 weeks ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆82 · Updated 10 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆159 · Updated 2 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆98 · Updated last week
- A RLHF Infrastructure for Vision-Language Models ☆176 · Updated 6 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 5 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- ACL 2025: Synthetic data generation pipelines for text-rich images. ☆73 · Updated 3 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆151 · Updated 8 months ago
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆53 · Updated 7 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- ☆77 · Updated 4 months ago
- ☆72 · Updated 9 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆158 · Updated 2 months ago
- ☆73 · Updated last year
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- ☆74 · Updated last year
- ☆102 · Updated last month
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆279 · Updated 8 months ago
- ☆64 · Updated last year
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆120 · Updated last year
- ☆99 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆127 · Updated 2 months ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 7 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated 2 weeks ago
- Official repository of MMDU dataset ☆91 · Updated 8 months ago