google-research-datasets / rico_semantics
Consists of ~500k human annotations on the RICO dataset: labels identifying various icons by their shapes and semantics, and associations between selected general UI elements and their text labels. The annotations also include human-annotated bounding boxes, which are more accurate and provide greater coverage of UI elements.
☆30 · Updated last year
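Since each annotation pairs a semantic label with a bounding box, a common first step is to load the annotation files and iterate over (label, box) pairs. The sketch below is a minimal example, assuming the annotations ship as JSON lists with `label` and `bounds` fields under a `rico_semantics_annotations/` directory; both the field names and the path are hypothetical, so check the repository's README for the actual schema.

```python
import json
from collections import Counter
from pathlib import Path

def load_annotations(path):
    """Yield (label, bounding_box) pairs from one annotation file.

    Assumes each file holds a JSON list of records with "label"
    (e.g., an icon class) and "bounds" (e.g., [x1, y1, x2, y2]).
    These field names are illustrative, not the confirmed schema.
    """
    for record in json.loads(Path(path).read_text()):
        label = record.get("label")
        bounds = record.get("bounds")
        if label is not None and bounds is not None:
            yield label, bounds

# Example: tally annotations per label across a directory of files.
counts = Counter(
    label
    for f in Path("rico_semantics_annotations").glob("*.json")  # hypothetical layout
    for label, _ in load_annotations(f)
)
print(counts.most_common(10))
```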
Alternatives and similar repositories for rico_semantics
Users interested in rico_semantics are comparing it to the repositories listed below.
- ☆13 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) · ☆90 · Updated 10 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… · ☆128 · Updated 3 weeks ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents · ☆127 · Updated last year
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments · ☆61 · Updated last year
- The model, data and code for the visual GUI Agent SeeClick · ☆417 · Updated last month
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… · ☆52 · Updated 3 years ago
- The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K … · ☆126 · Updated 6 months ago
- The dataset includes screen summaries that describe an Android app screenshot's functionalities. It is used for training and evaluation of … · ☆58 · Updated 4 years ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents · ☆134 · Updated last year
- [EMNLP 2022] The baseline code for the META-GUI dataset · ☆14 · Updated last year
- VINS: Visual Search for Mobile User Interface Design · ☆44 · Updated 4 years ago
- ☆36 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) · ☆247 · Updated last year
- A Universal Platform for Training and Evaluation of Mobile Interaction · ☆52 · Updated last month
- This repository contains the open-source version of the datasets used for different parts of training and testing of models that grou… · ☆32 · Updated 5 years ago
- Recognize graphical user interface layouts by grouping GUI elements according to their visual attributes · ☆45 · Updated 3 years ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… · ☆73 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. · ☆346 · Updated 7 months ago
- ☆31 · Updated 11 months ago
- Official GitHub repo of G-LLaVA · ☆146 · Updated 6 months ago
- ☆80 · Updated last year
- (ICLR 2025) The Official Code Repository for GUI-World. · ☆65 · Updated 8 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ☆307 · Updated 7 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" · ☆269 · Updated last year
- Under construction · ☆11 · Updated 7 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents · ☆272 · Updated last month
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs · ☆89 · Updated 10 months ago
- ☆11 · Updated last year
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" · ☆59 · Updated 3 months ago