google-research-datasets / screen_annotation
The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format and describe the UI elements present on the screen: their type, location, OCR text, and a short description. It was introduced in the paper `ScreenAI: A Vision-Language Model for UI and Infographics Understanding`.
☆73 · Updated last year
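Because each record pairs a screenshot identifier with a free-text annotation of the on-screen UI elements, loading the data requires little code. Below is a minimal sketch, assuming the annotations ship as CSV files; the file layout and column names (`screen_id`, `screen_annotation`) are illustrative assumptions, not the repository's confirmed schema, so check the repo's README before relying on them.

```python
# Minimal sketch for loading Screen Annotation-style data.
# ASSUMPTION: the dataset is distributed as CSV files that pair a
# screenshot ID with a free-text annotation string listing UI elements
# (type, location, OCR text, short description). Column names below are
# hypothetical placeholders -- verify against the repository's README.
import csv
from pathlib import Path


def load_annotations(csv_path: str) -> dict[str, str]:
    """Map screenshot ID -> raw annotation text for one CSV file."""
    annotations: dict[str, str] = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            annotations[row["screen_id"]] = row["screen_annotation"]
    return annotations


if __name__ == "__main__":
    data_dir = Path("screen_annotation")  # hypothetical local checkout
    for csv_file in sorted(data_dir.glob("*.csv")):
        ann = load_annotations(str(csv_file))
        print(f"{csv_file.name}: {len(ann)} annotated screens")
```

The annotation strings themselves can then be tokenized or fed directly to a vision-language model as supervision, depending on the use case.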
Alternatives and similar repositories for screen_annotation
Users interested in screen_annotation are comparing it to the repositories listed below
- ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆124 · Updated 6 months ago
- GUI Grounding for Professional High-Resolution Computer Use ☆248 · Updated 3 weeks ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆127 · Updated last year
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆269 · Updated last month
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆352 · Updated 5 months ago
- The model, data and code for the visual GUI Agent SeeClick ☆417 · Updated last month
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆247 · Updated last year
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆126 · Updated 3 weeks ago
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents ☆369 · Updated 4 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆134 · Updated last year
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆155 · Updated last month
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆90 · Updated 10 months ago
- Official repo for the paper DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning ☆372 · Updated 6 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆89 · Updated 10 months ago
- WebLINX is a benchmark for building web navigation agents with conversational capabilities ☆157 · Updated 6 months ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆52 · Updated last month
- Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆126 · Updated 3 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆230 · Updated 4 months ago
- ☆235 · Updated last week
- (ICLR 2025) The Official Code Repository for GUI-World ☆65 · Updated 8 months ago
- VisualWebArena is a benchmark for multimodal agents ☆368 · Updated 9 months ago
- [ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents ☆215 · Updated 2 months ago
- ☆31 · Updated 11 months ago
- GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents ☆321 · Updated 3 weeks ago
- Scaling Computer-Use Grounding via UI Decomposition and Synthesis ☆102 · Updated 2 months ago
- ☆89 · Updated last month
- Code for the paper 🌳 Tree Search for Language Model Agents ☆212 · Updated last year
- Code for the paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆53 · Updated 8 months ago
- ☆20 · Updated last year
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆61 · Updated last year