google-research-datasets / widget-caption
The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget captioning model (please see the EMNLP'20 paper: https://arxiv.org/abs/2010.04295).
☆23 · Updated 4 years ago
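As a quick orientation, below is a minimal sketch of how one might load the caption annotations and group them by screen. The file name `widget_captions.csv` and its columns (`screenId`, `nodeId`, `captions`, with multiple captions separated by `|`) are assumptions about the released CSV layout, not confirmed by this page; check the repository README for the exact schema.

```python
# Minimal sketch: load widget-caption annotations and group them by RICO screen.
# ASSUMPTION: the released CSV is named widget_captions.csv with columns
# screenId, nodeId, captions (multiple captions separated by "|").
# Verify the actual file name and schema against the repository README.
import csv
from collections import defaultdict


def load_widget_captions(path="widget_captions.csv"):
    """Return {screen_id: {node_id: [caption, ...]}} from the annotation CSV."""
    captions_by_screen = defaultdict(dict)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            screen_id = row["screenId"]
            node_id = row["nodeId"]
            captions_by_screen[screen_id][node_id] = row["captions"].split("|")
    return captions_by_screen


if __name__ == "__main__":
    data = load_widget_captions()
    print(f"{len(data)} screens with captioned widgets")
```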
Alternatives and similar repositories for widget-caption
Users interested in widget-caption are comparing it to the libraries listed below.
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆58 · Updated 4 years ago
- ☆122 · Updated last year
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆61 · Updated last year
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆126 · Updated 6 months ago
- Object Detection for Graphical User Interface: Old Fashioned or Deep Learning or a Combination? ☆127 · Updated last year
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆52 · Updated 3 years ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)☆247Updated last year
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… ☆32 · Updated 5 years ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆30 · Updated last year
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆45 · Updated 4 years ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆73 · Updated last year
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes ☆45 · Updated 3 years ago
- The model, data and code for the visual GUI Agent SeeClick ☆417 · Updated last month
- [WSDM 2024] Hierarchical Multimodal Pre-training for Visually Rich Webpage Understanding ☆15 · Updated last year
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆93 · Updated 5 months ago
- VisualWebArena is a benchmark for multimodal agents. ☆370 · Updated 9 months ago
- ☆218 · Updated 4 months ago
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆79 · Updated 2 years ago
- ☆66 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆90 · Updated 10 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆127 · Updated last year
- A curated mobile app design database ☆62 · Updated 3 years ago
- ☆115 · Updated last year
- ☆80 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆249 · Updated last year
- ☆140 · Updated last year
- [EMNLP 2022] The baseline code for the META-GUI dataset ☆14 · Updated last year
- A curated list of recent and past chart understanding work based on our IEEE TKDE survey paper: From Pixels to Insights: A Survey on Auto… ☆216 · Updated 2 months ago
- Chart-to-Text: Generating Natural Language Explanations for Charts by Adapting the Transformer Model ☆156 · Updated 2 years ago