NiteshMethani / PlotQA
Dataset introduced in PlotQA: Reasoning over Scientific Plots
☆80 · Updated 2 years ago
Alternatives and similar repositories for PlotQA
Users interested in PlotQA are comparing it to the repositories listed below.
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆100 · Updated 7 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 10 months ago
- SciCap Dataset ☆56 · Updated 4 years ago
- Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models (EMNLP 2023) ☆46 · Updated last year
- A huge dataset for Document Visual Question Answering ☆20 · Updated last year
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆105 · Updated last month
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆64 · Updated 6 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆133 · Updated 2 years ago
- A simulated dataset of 9,536 charts with associated data annotations in CSV format ☆26 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆129 · Updated 6 months ago
- [ICPRAI 2024] DocumentCLIP: Linking Figures and Main Body Text in Reflowed Documents ☆16 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆167 · Updated last year
- [ACL 2024] ChartAssistant: a chart-based vision-language model for universal chart comprehension and reasoning ☆131 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆91 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23), and Cross-modal Retriev… ☆38 · Updated 10 months ago
- Code used to create OBELICS, an open, massive, curated collection of interleaved image-text web documents containing 141M d… ☆209 · Updated last year
- Reading list for multimodal large language models ☆69 · Updated 2 years ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- Open-source code for the AAAI 2023 paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆166 · Updated 2 years ago
- Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning" ☆27 · Updated last year
- Code for the WordScape pipeline, which creates datasets for training document understanding models ☆37 · Updated last year