google-research-datasets / rico_semantics
Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations between selected general UI elements and their text labels. Annotations also include human-annotated bounding boxes, which are more accurate and offer greater coverage of UI elements.
☆32 · Updated last year
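As a rough illustration of how annotations like these might be consumed, here is a minimal Python sketch that filters icon annotations by label and reports bounding-box sizes. The file name and field names (`screen_id`, `label`, `bbox`) are assumptions for illustration, not the dataset's documented schema; consult the repository's README for the real format.

```python
import json

# Hypothetical record layout (illustrative only -- not the dataset's
# documented schema; see the repository README for the real format):
# {"screen_id": "28970", "label": "search",
#  "bbox": {"x_min": 10, "y_min": 20, "x_max": 54, "y_max": 64}}

def iter_icon_annotations(path, wanted_label=None):
    """Yield (screen_id, label, bbox) tuples, optionally filtered by label."""
    with open(path) as f:
        records = json.load(f)
    for record in records:
        if wanted_label is None or record["label"] == wanted_label:
            yield record["screen_id"], record["label"], record["bbox"]

if __name__ == "__main__":
    for screen_id, label, bbox in iter_icon_annotations(
        "icon_annotations.json", wanted_label="search"
    ):
        w = bbox["x_max"] - bbox["x_min"]
        h = bbox["y_max"] - bbox["y_min"]
        print(f"screen {screen_id}: {label} box is {w}x{h}px")
```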
Alternatives and similar repositories for rico_semantics
Users interested in rico_semantics are comparing it to the repositories listed below.
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 8,834 e… ☆147 · Updated 3 weeks ago
- The model, data and code for the visual GUI Agent SeeClick ☆461 · Updated 6 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆255 · Updated last year
- ☆15 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆97 · Updated last year
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆61 · Updated last year
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆62 · Updated 4 years ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆136 · Updated last year
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆139 · Updated 11 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆135 · Updated last year
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆53 · Updated 4 years ago
- ☆35 · Updated last year
- ☆31 · Updated last year
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes ☆49 · Updated 3 years ago
- ☆127 · Updated 2 years ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆60 · Updated 4 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆294 · Updated 6 months ago
- (ICLR 2025) The Official Code Repository for GUI-World. ☆68 · Updated last year
- This repository contains the open-source version of the datasets used for different parts of training and testing of models that grou… ☆33 · Updated 5 years ago
- The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget ca… ☆23 · Updated 4 years ago
- ☆12 · Updated last year
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆84 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆320 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆254 · Updated 9 months ago
- ☆36 · Updated 3 years ago
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆64 · Updated last month
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆63 · Updated last year
- A curated mobile app design database ☆66 · Updated 4 years ago
- [AAAI-2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆142 · Updated 2 months ago