google-research-datasets / widget-caption
The dataset includes widget captions that describe the functionality of UI elements. It is used for training and evaluating the widget captioning model (see the EMNLP'20 paper: https://arxiv.org/abs/2010.04295).
☆21 · Updated 3 years ago
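For quick exploration of the annotations, here is a minimal loading sketch. It assumes the captions ship as a CSV file; the file name `widget_captions.csv`, the `screenId`/`nodeId`/`captions` column names, and the `|` separator between multiple reference captions are assumptions for illustration, not details confirmed by this page.

```python
# Minimal sketch for exploring widget-caption style annotations.
# Assumptions (hypothetical, not confirmed here): a CSV named
# "widget_captions.csv" pairs a screen id and a UI node id with one or
# more human-written captions separated by "|".
import csv
from collections import defaultdict


def load_captions(path="widget_captions.csv"):
    """Group reference captions by (screen_id, node_id); column names are assumed."""
    captions = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["screenId"], row["nodeId"])
            captions[key].extend(
                c.strip() for c in row["captions"].split("|") if c.strip()
            )
    return captions


if __name__ == "__main__":
    caps = load_captions()
    print(f"{len(caps)} captioned UI elements")
    # Peek at one element and its reference captions.
    for (screen_id, node_id), refs in list(caps.items())[:1]:
        print(screen_id, node_id, refs)
```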
Alternatives and similar repositories for widget-caption:
Users interested in widget-caption are comparing it to the libraries listed below.
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆56 · Updated 3 years ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆60 · Updated 8 months ago
- ☆116 · Updated last year
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆50 · Updated 3 years ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆27 · Updated 10 months ago
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆114 · Updated 2 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆70 · Updated last year
- This repository contains the open-source version of the datasets used for different parts of training and testing of models that grou… ☆32 · Updated 4 years ago
- Object Detection for Graphical User Interface: Old Fashioned or Deep Learning or a Combination? ☆127 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆233 · Updated 9 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆113 · Updated 9 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆85 · Updated 6 months ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆42 · Updated 3 years ago
- ☆35 · Updated 2 years ago
- Recognize graphical user interface layout through grouping GUI elements according to their visual attributes ☆40 · Updated 2 years ago
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆88 · Updated last month
- The model, data and code for the visual GUI agent SeeClick ☆365 · Updated 5 months ago
- ☆13 · Updated 11 months ago
- ☆63 · Updated last year
- VINS: Visual Search for Mobile User Interface Design ☆37 · Updated 4 years ago
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆76 · Updated last year
- ☆113 · Updated 9 months ago
- A curated mobile app design database ☆60 · Updated 3 years ago
- ☆132 · Updated last year
- ☆193 · Updated 2 weeks ago
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆89 · Updated last year
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… ☆202 · Updated 8 months ago
- [EMNLP 2022] The baseline code for the META-GUI dataset ☆13 · Updated 9 months ago
- VisualWebArena is a benchmark for multimodal agents. ☆334 · Updated 5 months ago
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆67 · Updated last year