zhiqic / ChartReader
[ICCV 2023] ChartReader: A Unified Framework for Chart Derendering and Comprehension without Heuristic Rules
☆23 · Updated last year
Alternatives and similar repositories for ChartReader
Users who are interested in ChartReader are comparing it to the repositories listed below.
- Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models, EMNLP 2023 ☆46 · Updated last year
- The WordScape repository contains code for the WordScape pipeline to create datasets to train document understanding models. ☆37 · Updated last year
- ☆66 · Updated last year
- Dataset and scripts for HRDoc ☆39 · Updated 2 years ago
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆79 · Updated 2 years ago
- ☆80 · Updated 11 months ago
- ☆43 · Updated last year
- ☆140 · Updated 2 years ago
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆92 · Updated 4 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated last year
- Text-DIAE: A Self-Supervised Degradation Invariant Autoencoders for Text Recognition and Document Enhancement - AAAI 2023 ☆26 · Updated 2 years ago
- ☆32 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆263 · Updated 7 months ago
- InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions (AAAI 2024) ☆161 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆92 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆63 · Updated 3 months ago
- ☆26 · Updated last year
- ☆115 · Updated last year
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆98 · Updated 7 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆123 · Updated 11 months ago
- ☆24 · Updated 3 months ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… ☆206 · Updated 11 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- ☆86 · Updated last year
- Language Quantized AutoEncoders ☆108 · Updated 2 years ago
- My implementation of Kosmos2.5 from the paper: "KOSMOS-2.5: A Multimodal Literate Model" ☆73 · Updated 3 weeks ago
- Evaluation of the Optical Character Recognition (OCR) capabilities of GPT-4V(ision) ☆125 · Updated last year
- [ACL'25 Main] ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation ☆58 · Updated last week
- ☆215 · Updated 3 months ago
- ☆30 · Updated last year