The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from the Rico dataset. It is intended for training and evaluating models for screen content understanding via question answering.
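For orientation, each example in a screen-QA dataset of this kind pairs a Rico screenshot with a natural-language question and one or more human answers. A minimal sketch of handling such a record, assuming illustrative field names (`screen_id`, `question`, `answers`) rather than the dataset's exact schema — consult the official repository for the real format:

```python
import json

# One illustrative ScreenQA-style record. Field names are assumptions
# for the sketch, not the dataset's actual JSON schema.
record_json = """
{
  "screen_id": 12345,
  "question": "What is the battery level shown on screen?",
  "answers": ["75%"]
}
"""

record = json.loads(record_json)

# Rico screenshots are keyed by a numeric screen id, so the image for a
# record can be located from that id (path layout is an assumption).
screenshot_path = f"rico/combined/{record['screen_id']}.jpg"

print(f"Q: {record['question']}")
print(f"A: {record['answers'][0]}")
print(f"Image: {screenshot_path}")
```

A training loop would typically feed the screenshot pixels (or its view hierarchy) together with the question into a vision-language model and score the prediction against the annotated answers.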
☆141 · Updated Feb 7, 2025
Alternatives and similar repositories for screen_qa
Users that are interested in screen_qa are comparing it to the libraries listed below.
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆86 · Updated Mar 7, 2024
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆61 · Updated Aug 19, 2024
- The model, data, and code for the visual GUI agent SeeClick ☆475 · Updated Jul 13, 2025
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆137 · Updated Mar 1, 2026
- ☆33 · Updated Oct 1, 2024
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆100 · Updated Oct 14, 2024
- Includes two datasets used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆47 · Updated Aug 2, 2021
- Implementation of the ScreenAI model from the paper "A Vision-Language Model for UI and Infographics Understanding" ☆382 · Updated Feb 6, 2026
- The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget ca… ☆23 · Updated Jun 24, 2021
- VisionDroid ☆22 · Updated Apr 2, 2024
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆65 · Updated Jul 27, 2021
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆149 · Updated Jan 3, 2026
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆304 · Updated Mar 11, 2026
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes ☆49 · Updated Jun 17, 2022
- ☆20 · Updated Apr 24, 2024
- (ICLR 2025) The official code repository for GUI-World ☆68 · Updated Dec 18, 2024
- A mobile GUI search engine using a vision-language model ☆14 · Updated May 5, 2025
- OCRVerse: Towards Holistic OCR in End-to-End Vision-Language Models ☆29 · Updated Feb 4, 2026
- GUI Grounding for Professional High-Resolution Computer Use ☆347 · Updated Mar 4, 2026
- Collection of aesthetics assessment papers for graphic designs ☆35 · Updated Aug 29, 2025
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆60 · Updated Dec 28, 2024
- Web-grounded natural language instructions ☆18 · Updated Nov 25, 2024
- Consists of ~500K human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆34 · Updated Jun 27, 2024
- ☆31 · Updated Sep 27, 2024
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… ☆834 · Updated Feb 3, 2025
- ☆680 · Updated Jun 3, 2025
- ☆17 · Updated Jun 12, 2024
- [NeurIPS 2025] UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents ☆55 · Updated Nov 27, 2025
- ☆27 · Updated Dec 29, 2023
- OCR annotations from Amazon Textract for the Industry Documents Library ☆103 · Updated Aug 20, 2022
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents ☆442 · Updated Apr 20, 2025
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆66 · Updated Nov 1, 2024
- Official repository of ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning ☆252 · Updated Sep 26, 2024
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆256 · Updated Jul 16, 2024
- On the Hidden Mystery of OCR in Large Multimodal Models (OCRBench) ☆802 · Updated Jul 5, 2025
- ☆30 · Updated Dec 27, 2024
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆164 · Updated Sep 27, 2025
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… ☆34 · Updated Aug 20, 2020
- [SCIS] MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images ☆44 · Updated Nov 19, 2025