Vision-CAIR / dochaystacks
Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents, CVPR 2025
☆18 · Updated 3 months ago
Alternatives and similar repositories for dochaystacks
Users interested in dochaystacks are comparing it to the repositories listed below.
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆75 · Updated 6 months ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆56 · Updated 6 months ago
- ☆75 · Updated 4 months ago
- Synthetic data generation pipelines for text-rich images. ☆67 · Updated 2 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 · Updated 3 months ago
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆80 · Updated 10 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆66 · Updated 11 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆43 · Updated 2 months ago
- Official Code of IdealGPT ☆35 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆84 · Updated 10 months ago
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding". ☆39 · Updated 7 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆83 · Updated last week
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆116 · Updated 5 months ago
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs ☆11 · Updated last month
- ☆97 · Updated last month
- ☆99 · Updated last year
- ☆25 · Updated 9 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 10 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆54 · Updated last month
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆24 · Updated last year
- ☆63 · Updated last week
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆50 · Updated this week
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 6 months ago
- ☆48 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆78 · Updated 3 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆46 · Updated 5 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 5 months ago
- An easy-to-use hallucination detection framework for LLMs. ☆58 · Updated last year