OpenGVLab / ChartAst
[ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.
☆123 · Updated 11 months ago
Alternatives and similar repositories for ChartAst
Users interested in ChartAst are comparing it to the repositories listed below.
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆97 · Updated 7 months ago
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆96 · Updated last year
- ☆80 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆114 · Updated 3 months ago
- ☆65 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆124 · Updated 4 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆288 · Updated 11 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆96 · Updated 3 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆161 · Updated 10 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆81 · Updated 9 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 3 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆26 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 10 months ago
- Official repository of the MMDU dataset ☆93 · Updated 11 months ago
- An RLHF infrastructure for vision-language models ☆182 · Updated 9 months ago
- Code & Dataset for the paper "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆53 · Updated 10 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆271 · Updated last year
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆54 · Updated 3 weeks ago
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆121 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆82 · Updated 7 months ago
- ☆73 · Updated last year
- A collection of papers on multimodal LLMs for Math/STEM/Code. ☆123 · Updated 2 weeks ago
- The code and data of We-Math, accepted to the ACL 2025 main conference. ☆135 · Updated last week
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆113 · Updated last year
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated last year
- ☆100 · Updated last year
- ☆86 · Updated 7 months ago