google-research-datasets / clay
The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android app screenshots. It is used for training and evaluating screen layout denoising models (paper will be linked soon).
☆53 · Updated 3 years ago
Alternatives and similar repositories for clay
Users interested in clay are comparing it to the libraries listed below.
- Recognizes graphical user interface layout by grouping GUI elements according to their visual attributes ☆47 · Updated 3 years ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆32 · Updated last year
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆46 · Updated 4 years ago
- A curated mobile app design database ☆65 · Updated 4 years ago
- ☆125 · Updated 2 years ago
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆60 · Updated 4 years ago
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… ☆32 · Updated 5 years ago
- VINS: Visual Search for Mobile User Interface Design ☆48 · Updated 4 years ago
- Screen2Vec is a new self-supervised technique for generating more comprehensive semantic embeddings of GUI screens and components using t… ☆80 · Updated 10 months ago
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆134 · Updated 10 months ago
- ☆14 · Updated last year
- UICrit is a dataset containing human-generated natural language design critiques, corresponding bounding boxes for each critique, and des… ☆24 · Updated last year
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆81 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆256 · Updated last year
- Object Detection for Graphical User Interface: Old Fashioned or Deep Learning or a Combination? ☆128 · Updated last year
- The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget ca… ☆23 · Updated 4 years ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆60 · Updated last year
- ☆36 · Updated 3 years ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆137 · Updated 4 months ago
- The model, data, and code for the visual GUI agent SeeClick ☆447 · Updated 5 months ago
- ☆83 · Updated last year
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆82 · Updated 2 years ago
- A curated list of recent and past chart understanding work based on our IEEE TKDE survey paper: From Pixels to Insights: A Survey on Auto… ☆229 · Updated this week
- ☆231 · Updated 8 months ago
- An accurate GUI element detection approach based on old-fashioned CV algorithms [Upgraded on 5/July/2021] ☆512 · Updated 2 years ago
- (ICLR 2025) The official code repository for GUI-World ☆66 · Updated last year
- ☆44 · Updated last year
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆134 · Updated last year
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google Research ☆15 · Updated 5 years ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆97 · Updated last year