google-research-datasets / clay
The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android app screenshots. It is used for training and evaluating screen layout denoising models (paper will be linked soon).
☆49 · Updated 3 years ago
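As a quick illustration of how such type labels might be consumed, the sketch below tallies label frequencies from an annotation file. This is a minimal sketch only: the CSV column names (`screen_id`, `node_id`, `label`) are hypothetical and chosen for illustration, not the dataset's documented schema.

```python
# Minimal sketch: counting UI object type labels from a CLAY-style
# annotation file. The column layout (screen_id, node_id, label) is an
# assumption for illustration, not the dataset's documented schema.
import csv
import io
from collections import Counter

def count_labels(csv_text: str) -> Counter:
    """Count how often each semantic type label appears."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["label"] for row in reader)

# Tiny in-memory example mimicking the assumed layout.
sample = """screen_id,node_id,label
1001,0,BUTTON
1001,1,IMAGE
1001,2,BUTTON
1002,0,CHECKBOX
"""

print(count_labels(sample))
# e.g. Counter({'BUTTON': 2, 'IMAGE': 1, 'CHECKBOX': 1})
```

A real pipeline would read the annotation file from disk and join labels back to the screenshot's view hierarchy by node identifier before training.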
Alternatives and similar repositories for clay:
Users interested in clay are comparing it to the repositories listed below.
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b…☆28 · Updated 8 months ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item …☆41 · Updated 3 years ago
- A curated mobile app design database☆60 · Updated 3 years ago
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of …☆54 · Updated 3 years ago
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou…☆32 · Updated 4 years ago
- Recognizes graphical user interface layouts by grouping GUI elements according to their visual attributes☆40 · Updated 2 years ago
- Screen2Vec is a new self-supervised technique for generating more comprehensive semantic embeddings of GUI screens and components using t…☆69 · Updated last month
- The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget ca…☆20 · Updated 3 years ago
- VINS: Visual Search for Mobile User Interface Design☆36 · Updated 4 years ago
- ☆13 · Updated 10 months ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments☆60 · Updated 7 months ago
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google research☆13 · Updated 4 years ago
- ☆111 · Updated last year
- Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning☆48 · Updated last year
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc…☆63 · Updated last year
- Object Detection for Graphical User Interface: Old Fashioned or Deep Learning or a Combination?☆127 · Updated last year
- ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K …☆107 · Updated last month
- (ICLR 2025) The Official Code Repository for GUI-World.☆52 · Updated 3 months ago
- JSON Processing of the RICO Dataset☆15 · Updated 2 years ago
- Learning UI Similarity using Graph Networks☆37 · Updated 4 years ago
- ☆34 · Updated 2 years ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents☆134 · Updated 8 months ago
- [EMNLP 2022] The baseline code for the META-GUI dataset☆12 · Updated 8 months ago
- ☆69 · Updated 7 months ago
- Swire Dataset and Application Code☆17 · Updated 6 years ago
- Figma Files Scraper for Research & Studies☆22 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)☆223 · Updated 8 months ago
- PyTorch implementation of "LayoutTransformer: Layout Generation and Completion with Self-attention" (ICCV 2021)☆157 · Updated 3 years ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents☆103 · Updated 8 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024)☆77 · Updated 5 months ago