bytedance / MTVQA
MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of the multilingual text perception and comprehension capabilities of multimodal large models across nine widely used yet low-resource languages.
☆59 · Updated last month
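For orientation, below is a minimal sketch of an evaluation loop over an MTVQA-style text-centric VQA benchmark. It assumes the dataset is available on the Hugging Face Hub under the id `ByteDance/MTVQA` with `image`, `question`, and `answer` fields; the split, field names, and the `answer` stub are illustrative assumptions, not the official evaluation harness.

```python
# Minimal sketch of scoring a multimodal LLM on an MTVQA-style VQA set.
# Assumptions (hypothetical, not the official harness): the dataset lives on
# the Hugging Face Hub as "ByteDance/MTVQA" and each record exposes "image",
# "question", and "answer" fields.
from datasets import load_dataset

def answer(image, question: str) -> str:
    """Placeholder for your multimodal model's inference call."""
    return ""

def exact_match(prediction: str, reference: str) -> bool:
    # Normalized exact match, a simple baseline metric for VQA scoring.
    return prediction.strip().lower() == reference.strip().lower()

ds = load_dataset("ByteDance/MTVQA", split="test")  # assumed dataset id/split
correct = sum(
    exact_match(answer(s["image"], s["question"]), s["answer"]) for s in ds
)
print(f"Exact-match accuracy: {correct / len(ds):.2%}")
```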
Alternatives and similar repositories for MTVQA
Users interested in MTVQA are comparing it to the libraries listed below.
- ACL 2025: Synthetic data generation pipelines for text-rich images. ☆86 · Updated 3 months ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆83 · Updated 11 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 5 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆62 · Updated 7 months ago
- A huge dataset for Document Visual Question Answering ☆18 · Updated 10 months ago
- A simulated dataset of 9,536 charts with associated data annotations in CSV format. ☆25 · Updated last year
- Evaluation of the Optical Character Recognition (OCR) capabilities of GPT-4V(ision) ☆124 · Updated last year
- InstructionGPT-4 ☆39 · Updated last year
- ☆47 · Updated 9 months ago
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever. ☆91 · Updated 3 weeks ago
- ☆65 · Updated last year
- Code for the ICCV 2023 paper “ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction” ☆53 · Updated last year
- ☆34 · Updated 8 months ago
- ☆136 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 8 months ago
- ☆73 · Updated last year
- ☆41 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- ☆49 · Updated 4 months ago
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆39 · Updated 7 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆64 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 7 months ago
- The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”. ☆40 · Updated 9 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆85 · Updated last year
- Vary-tiny codebase built on LAVIS (for training from scratch), plus a PDF image-text pair dataset (about 600k pairs, English/Chinese) ☆84 · Updated 9 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆60 · Updated last month
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated 10 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆263 · Updated last year
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆90 · Updated 2 months ago