google-research-datasets / screen2words
The dataset includes screen summaries that describe the functionality of Android app screenshots. It is used for training and evaluation of the screen2words models (our paper accepted by UIST'21 will be linked soon).
☆57 · Updated 3 years ago
Alternatives and similar repositories for screen2words
Users interested in screen2words are comparing it to the libraries listed below.
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆28 · Updated 11 months ago
- ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆117 · Updated 3 months ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆60 · Updated 9 months ago
- The dataset includes widget captions that describe the functionality of UI elements. It is used for training and evaluation of the widget ca… ☆21 · Updated 3 years ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆52 · Updated 3 years ago
- This repository contains the open-source version of the datasets used for different parts of training and testing of models that grou… ☆32 · Updated 4 years ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆238 · Updated 10 months ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆43 · Updated 3 years ago
- Recognizes graphical user interface layout by grouping GUI elements according to their visual attributes ☆42 · Updated 2 years ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆71 · Updated last year
- [EMNLP 2022] The baseline code for the META-GUI dataset ☆13 · Updated 10 months ago
- (ICLR 2025) The official code repository for GUI-World ☆57 · Updated 5 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆114 · Updated 6 months ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆45 · Updated 3 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆116 · Updated 10 months ago
- ☆116 · Updated last year
- The model, data and code for the visual GUI agent SeeClick ☆379 · Updated 6 months ago
- ☆35 · Updated 11 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆87 · Updated last year
- ☆57 · Updated last year
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆134 · Updated 10 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆239 · Updated last week
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆86 · Updated 7 months ago
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google Research ☆14 · Updated 4 years ago
- Screen2Vec is a new self-supervised technique for generating more comprehensive semantic embeddings of GUI screens and components using t… ☆72 · Updated 4 months ago
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Updated 10 months ago
- ☆13 · Updated last year
- Evaluating tool-augmented LLMs in conversation settings ☆84 · Updated last year
- VisualWebArena is a benchmark for multimodal agents. ☆347 · Updated 6 months ago
- ☆29 · Updated 8 months ago