google-research-datasets / rico_semantics
Consists of ~500k human annotations on the RICO dataset, identifying various icons by their shapes and semantics, along with associations between selected general UI elements and their text labels. The annotations also include human-annotated bounding boxes, which are more accurate and cover more UI elements than the originals.
☆32 · Updated last year
Alternatives and similar repositories for rico_semantics
Users interested in rico_semantics are comparing it to the libraries listed below.
- ☆14 · Updated last year
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆137 · Updated 4 months ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆53 · Updated 3 years ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆95 · Updated last year
- The model, data and code for the visual GUI agent SeeClick ☆447 · Updated 5 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆134 · Updated last year
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes ☆47 · Updated 3 years ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆256 · Updated last year
- The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K … ☆134 · Updated 10 months ago
- A universal platform for training and evaluation of mobile interaction ☆57 · Updated 2 months ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆60 · Updated last year
- ☆35 · Updated last year
- The dataset includes screen summaries that describe the functionality of Android app screenshots. It is used for training and evaluation of … ☆60 · Updated 4 years ago
- (ICLR 2025) The official code repository for GUI-World ☆66 · Updated last year
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆135 · Updated last year
- ☆31 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆358 · Updated 2 years ago
- ☆125 · Updated 2 years ago
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions ☆356 · Updated 11 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format and desc… ☆81 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆97 · Updated last year
- VINS: Visual Search for Mobile User Interface Design ☆48 · Updated 4 years ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆236 · Updated 8 months ago
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆275 · Updated 6 months ago
- A curated mobile app design database ☆65 · Updated 4 years ago
- ☆83 · Updated last year
- ☆12 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆174 · Updated last year
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆61 · Updated 2 weeks ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆112 · Updated 2 months ago