google-research-datasets / rico_semantics
Consists of ~500k human annotations on the RICO dataset, identifying various icons by their shapes and semantics, as well as associations between selected general UI elements and their text labels. Annotations also include human-annotated bounding boxes that are more accurate and cover more UI elements.
☆22 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for rico_semantics
- ☆11 · Updated 5 months ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆45 · Updated 2 years ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆57 · Updated 2 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆64 · Updated 4 months ago
- VINS: Visual Search for Mobile User Interface Design ☆30 · Updated 3 years ago
- The dataset includes screen summaries that describe an Android app screenshot's functionalities. It is used for training and evaluation of … ☆48 · Updated 3 years ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆42 · Updated 3 weeks ago
- ☆19 · Updated last month
- The model, data and code for the visual GUI Agent SeeClick ☆215 · Updated 2 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆78 · Updated 3 months ago
- ☆101 · Updated 11 months ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆46 · Updated 3 weeks ago
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… ☆29 · Updated 4 years ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆34 · Updated 3 months ago
- Official GitHub repo of G-LLaVA ☆121 · Updated 5 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆128 · Updated 3 months ago
- ☆61 · Updated 2 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆196 · Updated 3 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆113 · Updated last week
- The Official Code Repository for GUI-World ☆36 · Updated 3 months ago
- The dataset includes widget captions that describe a UI element's functionalities. It is used for training and evaluation of the widget ca… ☆17 · Updated 3 years ago
- [EMNLP 2022] The baseline code for the META-GUI dataset ☆11 · Updated 4 months ago
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents ☆175 · Updated 2 weeks ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆41 · Updated 3 years ago
- A curated mobile app design database ☆53 · Updated 3 years ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆76 · Updated 9 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆48 · Updated 8 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆105 · Updated last month
- A curated list of resources about long-context in large-language models and video understanding ☆30 · Updated last year
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆163 · Updated 2 months ago