princeton-nlp / CharXiv
[NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
☆124 · Updated 4 months ago
Alternatives and similar repositories for CharXiv
Users interested in CharXiv are comparing it to the repositories listed below.
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆114 · Updated 3 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 3 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆120 · Updated last week
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search tools ☆294 · Updated 2 weeks ago
- An RLHF Infrastructure for Vision-Language Models ☆181 · Updated 9 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 10 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 5 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆140 · Updated 2 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆168 · Updated 5 months ago
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆112 · Updated 2 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆98 · Updated 7 months ago
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆53 · Updated 9 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆82 · Updated 7 months ago
- ☆100 · Updated last year
- Official Repository of MMLongBench-Doc: Benchmarking Long-context Document Understanding with Visualizations ☆94 · Updated last year
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆150 · Updated last month
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆161 · Updated 10 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆123 · Updated 11 months ago
- ☆86 · Updated 7 months ago
- ☆80 · Updated last year
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model ☆83 · Updated 7 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆126 · Updated 2 months ago
- This repository is maintained to release the dataset and models for multimodal puzzle reasoning. ☆101 · Updated 6 months ago
- ☆80 · Updated last year
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 9 months ago
- The code and data of We-Math, accepted to the ACL 2025 main conference. ☆135 · Updated this week
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆40 · Updated 6 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆36 · Updated last year