hewei2001 / ReachQA
Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs"
☆50 · Updated 3 months ago
Alternatives and similar repositories for ReachQA:
Users interested in ReachQA are comparing it to the repositories listed below.
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆77 · Updated 7 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆96 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆89 · Updated last month
- MATH-Vision dataset and code to measure Multimodal Mathematical Reasoning capabilities ☆85 · Updated 4 months ago
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆53 · Updated 9 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆141 · Updated last month
- Large Language Models Can Self-Improve in Long-context Reasoning ☆62 · Updated 2 months ago
- Official PyTorch implementation of "MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced …" ☆57 · Updated 3 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 3 weeks ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆111 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated 2 months ago
- ☆60 · Updated 8 months ago
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆38 · Updated 2 months ago
- MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models ☆23 · Updated 3 weeks ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆29 · Updated 7 months ago
- ☆73 · Updated 11 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆63 · Updated 8 months ago
- ☆58 · Updated last month
- A Self-Training Framework for Vision-Language Reasoning ☆63 · Updated 3 weeks ago
- ☆58 · Updated 5 months ago
- Reformatted Alignment ☆114 · Updated 4 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models ☆64 · Updated 2 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆49 · Updated 4 months ago
- An easy-to-use hallucination detection framework for LLMs ☆57 · Updated 9 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆46 · Updated 2 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆74 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago
- Official repository of MMLongBench-Doc: Benchmarking Long-context Document Understanding with Visualizations ☆67 · Updated 7 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆126 · Updated 4 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆23 · Updated 4 months ago