google-research-datasets / screen2words
The dataset includes screen summaries that describe the functionalities of Android app screenshots. It is used for training and evaluation of the screen2words models (our paper accepted by UIST'21 will be linked soon).
☆58 · Updated 4 years ago
Alternatives and similar repositories for screen2words
Users that are interested in screen2words are comparing it to the libraries listed below
- The dataset includes widget captions that describe the functionalities of UI elements. It is used for training and evaluation of the widget ca…☆23 · Updated 4 years ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)☆250 · Updated last year
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K …☆128 · Updated 7 months ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments☆61 · Updated last year
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou…☆32 · Updated 5 years ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap…☆53 · Updated 3 years ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item …☆46 · Updated 4 years ago
- The model, data and code for the visual GUI Agent SeeClick☆426 · Updated 2 months ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b…☆30 · Updated last year
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes☆47 · Updated 3 years ago
- ☆123 · Updated last year
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc…☆76 · Updated last year
- GUICourse: From General Vision Language Models to Versatile GUI Agents☆128 · Updated last year
- GPT-4V in Wonderland: LMMs as Smartphone Agents☆134 · Updated last year
- VisualWebArena is a benchmark for multimodal agents.☆379 · Updated 10 months ago
- (ICLR 2025) The Official Code Repository for GUI-World.☆65 · Updated 9 months ago
- Implementation of the ScreenAI model from the paper: "A Vision-Language Model for UI and Infographics Understanding"☆366 · Updated last week
- A curated mobile app design database☆63 · Updated 3 years ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e…☆128 · Updated last month
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents☆277 · Updated 2 months ago
- [EMNLP 2022] The baseline code for the META-GUI dataset☆14 · Updated last year
- A Universal Platform for Training and Evaluation of Mobile Interaction☆55 · Updated last week
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024)☆91 · Updated 11 months ago
- Langchain implementation of HuggingGPT☆133 · Updated 2 years ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral)☆263 · Updated last year
- ☆135 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding"☆269 · Updated last year
- [NeurIPS'23 Spotlight] "Mind2Web: Towards a Generalist Agent for the Web" -- the first LLM-based web agent and benchmark for generalist w…☆875 · Updated 5 months ago
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023☆249 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents☆237 · Updated 4 months ago