huggingface / docmatix
A huge dataset for Document Visual Question Answering
☆13 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for docmatix
- ☆45 · Updated last year
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆32 · Updated 5 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 5 months ago
- Code for the paper titled "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated last year
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆45 · Updated last month
- ☆29 · Updated last year
- A simulated dataset consisting of 9,536 charts with associated data annotations in CSV format ☆21 · Updated 8 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆78 · Updated 10 months ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆50 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆55 · Updated last month
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆32 · Updated last year
- ☆35 · Updated 3 months ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆57 · Updated 4 months ago
- ☆87 · Updated 10 months ago
- ☆19 · Updated 11 months ago
- ☆21 · Updated 3 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆22 · Updated 4 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆31 · Updated 3 months ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆31 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆55 · Updated last month
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 2 years ago
- ☆36 · Updated last year
- PyTorch implementation of "UNIT: Unifying Image and Text Recognition in One Vision Encoder", NeurIPS 2024 ☆20 · Updated last month
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆14 · Updated last month
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆132 · Updated last year
- ☆84 · Updated 10 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆102 · Updated last month
- M4 experiment logbook ☆56 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆84 · Updated last year