bytedance / MTVQA
MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of the multilingual text perception and comprehension capabilities of multimodal large language models across nine widely used yet low-resource languages.
☆62 · Updated 4 months ago
Alternatives and similar repositories for MTVQA
Users interested in MTVQA are comparing it to the libraries listed below.
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 8 months ago
- Evaluation of the Optical Character Recognition (OCR) capabilities of GPT-4V(ision) ☆125 · Updated last year
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆97 · Updated last year
- ☆67 · Updated last year
- Vary-tiny codebase built on LAVIS (for training from scratch) and a PDF image-text pair dataset (about 600k pairs, English/Chinese) ☆86 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆62 · Updated 10 months ago
- ☆138 · Updated last year
- ☆44 · Updated last year
- A huge dataset for Document Visual Question Answering ☆19 · Updated last year
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆95 · Updated 5 months ago
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆26 · Updated last year
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆97 · Updated 3 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆273 · Updated last year
- ☆74 · Updated last year
- ☆48 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- ☆32 · Updated last year
- Code for the ICCV 2023 paper “ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction” ☆54 · Updated 2 years ago
- ☆87 · Updated last year
- ☆38 · Updated 11 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 11 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆125 · Updated 4 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆126 · Updated last year
- ☆92 · Updated 2 months ago
- ☆65 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆89 · Updated last year
- ☆57 · Updated last year
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆48 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- ☆80 · Updated last year