google-research-datasets / screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from Rico, and is intended for training and evaluating models for screen content understanding via question answering.
☆128 · Updated 7 months ago
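As a rough illustration of how QA annotations of this kind are typically consumed, the sketch below loads a JSON list of question-answer records and pairs each record with its Rico screenshot. The file name, directory layout, and field names (`screen_id`, `question`, `answers`) are assumptions for illustration only, not the dataset's confirmed schema; check the repository's README for the actual format.

```python
import json
from pathlib import Path

# Hypothetical paths and field names -- adjust to match the actual release layout.
ANNOTATIONS_FILE = Path("screen_qa/annotations.json")   # assumed file of QA records
RICO_SCREENSHOT_DIR = Path("rico/combined")              # Rico screenshots as <screen_id>.jpg


def load_screenqa_pairs(annotations_file: Path, screenshot_dir: Path):
    """Yield (screenshot_path, question, answers) triples from a ScreenQA-style JSON file."""
    records = json.loads(annotations_file.read_text())
    for record in records:
        screenshot = screenshot_dir / f"{record['screen_id']}.jpg"
        if not screenshot.exists():
            continue  # skip QA pairs whose screenshot is missing locally
        yield screenshot, record["question"], record.get("answers", [])


if __name__ == "__main__":
    for screenshot, question, answers in load_screenqa_pairs(ANNOTATIONS_FILE, RICO_SCREENSHOT_DIR):
        print(screenshot.name, "|", question, "->", answers)
        break  # print a single example pair
```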
Alternatives and similar repositories for screen_qa
Users interested in screen_qa are comparing it to the libraries listed below.
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆76 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆249 · Updated last year
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆127 · Updated last year
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆276 · Updated 2 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆128 · Updated last month
- The model, data and code for the visual GUI Agent SeeClick ☆422 · Updated 2 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆91 · Updated 11 months ago
- GUI Grounding for Professional High-Resolution Computer Use ☆252 · Updated this week
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆308 · Updated 7 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- A family of highly capable yet efficient large multimodal models ☆190 · Updated last year
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆97 · Updated last year
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents ☆379 · Updated 4 months ago
- ☆31 · Updated 11 months ago
- HPT - Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆125 · Updated 4 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆491 · Updated 3 months ago
- VisualWebArena is a benchmark for multimodal agents. ☆374 · Updated 10 months ago
- ☆236 · Updated 3 weeks ago
- A curated list of recent and past chart understanding work based on our IEEE TKDE survey paper: From Pixels to Insights: A Survey on Auto… ☆219 · Updated 3 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 8 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆97 · Updated 3 months ago
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs ☆55 · Updated 3 weeks ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆356 · Updated 6 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆90 · Updated 10 months ago
- ☆74 · Updated last year
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆116 · Updated 4 months ago
- ☆141 · Updated last year
- ☆36 · Updated last year