OpenGVLab / ChartAst
[ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.
☆123 · Updated 11 months ago
Alternatives and similar repositories for ChartAst
Users interested in ChartAst are comparing it to the repositories listed below.
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆90 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- ☆79 · Updated 11 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆123 · Updated 3 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆111 · Updated 2 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆98 · Updated 7 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆79 · Updated 8 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 2 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆116 · Updated 8 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆26 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆158 · Updated 10 months ago
- ☆66 · Updated last year
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆93 · Updated 2 months ago
- ☆54 · Updated last week
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 6 months ago
- ☆85 · Updated 7 months ago
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆53 · Updated 9 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆60 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆168 · Updated 4 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 6 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆287 · Updated 10 months ago
- An RLHF Infrastructure for Vision-Language Models ☆179 · Updated 8 months ago
- Paper collections of multi-modal LLMs for Math/STEM/Code. ☆117 · Updated 2 weeks ago
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆52 · Updated 2 months ago
- [ACL'25 Main] ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation ☆58 · Updated last week
- Official repository of the MMDU dataset ☆93 · Updated 10 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 9 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆134 · Updated 2 months ago