google-research-datasets / screen_annotation
The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format and describe the UI elements present on the screen: their type, location, OCR text, and a short description. The dataset was introduced in the paper `ScreenAI: A Vision-Language Model for UI and Infographics Understanding`.
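A consumer of the dataset would parse each annotation string into structured UI elements. The exact annotation grammar is defined by the dataset itself and is not reproduced here; the sketch below assumes a simplified, hypothetical line-based format (element type, bounding box, quoted OCR text, free-form description) purely for illustration:

```python
import re
from dataclasses import dataclass
from typing import List, Tuple

# ASSUMPTION: this line format is invented for illustration; the real
# Screen Annotation grammar may differ. Example line:
#   BUTTON 12 34 56 78 "Submit" a blue submit button
# i.e. type, bounding box (x0 y0 x1 y1), OCR text in quotes, description.

@dataclass
class UIElement:
    elem_type: str
    bbox: Tuple[int, int, int, int]
    ocr_text: str
    description: str

_LINE = re.compile(r'^(\w+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+"([^"]*)"\s*(.*)$')

def parse_annotation(text: str) -> List[UIElement]:
    """Parse one screen's annotation text into structured UI elements."""
    elements = []
    for line in text.strip().splitlines():
        m = _LINE.match(line)
        if m:
            t, x0, y0, x1, y1, ocr, desc = m.groups()
            elements.append(
                UIElement(t, (int(x0), int(y0), int(x1), int(y1)), ocr, desc)
            )
    return elements
```

With structured elements in hand, the annotations can be used as supervision targets or rendered back over the screenshot for inspection.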
Related projects:
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K …
- GPT-4V in Wonderland: LMMs as Smartphone Agents
- GUICourse: From General Vision Language Models to Versatile GUI Agents
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)
- The model, data, and code for the visual GUI agent SeeClick
- The dataset includes screen summaries that describe the functionalities of Android app screenshots. It is used for training and evaluation of …
- ControlLLM: Augment Language Models with Tools by Searching on Graphs
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b…
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr…
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs
- VisualWebArena is a benchmark for multimodal agents.
- E5-V: Universal Embeddings with Multimodal Large Language Models
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model"
- Towards Large Multimodal Models as Visual Foundation Agents
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control
- A Universal Platform for Training and Evaluation of Mobile Interaction
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou…
- A family of highly capable yet efficient large multimodal models
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI
- Code for the paper 🌳 Tree Search for Language Model Agents
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap…
- Benchmarks, environments, and toolkits for general computer agents
- The Official Code Repository for GUI-World