google-research-datasets / rico_semantics
Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, along with associations between selected general UI elements and their text labels. The annotations also include human-annotated bounding boxes, which are more accurate and have greater coverage of UI elements.
☆28 · Updated 8 months ago
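As a rough illustration of how such annotations might be consumed, here is a minimal Python sketch that overlays icon bounding boxes on a screenshot. The file names and the JSON schema (a list of records with `label` and `bbox` fields) are assumptions for illustration only; consult the repository's data documentation for the actual format.

```python
# A minimal sketch of loading icon annotations and drawing their bounding
# boxes on a RICO screenshot. The file names and JSON schema below are
# assumptions for illustration; check the repository's data description
# for the actual field names.
import json
from PIL import Image, ImageDraw

def draw_icon_boxes(screenshot_path, annotation_path, out_path):
    """Overlay annotated icon bounding boxes on a screenshot."""
    image = Image.open(screenshot_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    with open(annotation_path) as f:
        annotations = json.load(f)  # hypothetical: a list of annotation dicts
    for ann in annotations:
        # hypothetical fields: "label" (icon class), "bbox" = [x1, y1, x2, y2]
        x1, y1, x2, y2 = ann["bbox"]
        draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
        draw.text((x1, max(0, y1 - 12)), ann["label"], fill="red")
    image.save(out_path)

if __name__ == "__main__":
    # hypothetical paths, shown only to demonstrate the call
    draw_icon_boxes("screenshot.png", "annotations.json", "overlay.png")
```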
Alternatives and similar repositories for rico_semantics:
Users interested in rico_semantics are comparing it to the libraries listed below.
- ☆13 · Updated 10 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆93 · Updated 4 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆77 · Updated 5 months ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆60 · Updated 7 months ago
- The dataset includes screen summaries that describe the functionalities of Android app screenshots. It is used for training and evaluation of … ☆54 · Updated 3 years ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆42 · Updated 3 weeks ago
- VINS: Visual Search for Mobile User Interface Design ☆36 · Updated 4 years ago
- (ICLR 2025) The Official Code Repository for GUI-World. ☆52 · Updated 3 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆103 · Updated 8 months ago
- ☆28 · Updated 5 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆63 · Updated last year
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆49 · Updated 3 years ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆50 · Updated 5 months ago
- The model, data and code for the visual GUI Agent SeeClick ☆336 · Updated 3 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆134 · Updated 8 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆223 · Updated 8 months ago
- ☆34 · Updated 2 years ago
- Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆114 · Updated 2 weeks ago
- ☆12 · Updated 7 months ago
- ☆26 · Updated 5 months ago
- ☆32 · Updated 9 months ago
- Recognizes graphical user interface layouts by grouping GUI elements according to their visual attributes ☆40 · Updated 2 years ago
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… ☆32 · Updated 4 years ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆192 · Updated last month
- ☆111 · Updated last year
- ☆17 · Updated 10 months ago
- Screen2Vec is a new self-supervised technique for generating more comprehensive semantic embeddings of GUI screens and components using t… ☆69 · Updated last month
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated 3 weeks ago
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google Research ☆13 · Updated 4 years ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆75 · Updated 4 months ago