[ICCV 2025] GUI-Odyssey is a comprehensive dataset for training and evaluating cross-app GUI navigation agents. It consists of 8,834 episodes collected from 6 mobile devices, spanning 6 types of cross-app tasks, 212 apps, and 1.4K app combinations.
☆147 · Updated Jan 3, 2026
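Cross-app episodes like these are typically grouped by the combination of apps they traverse. A minimal sketch of selecting episodes by app combination, assuming a simple JSON-like record per episode (all field names below are hypothetical, not GUI-Odyssey's actual schema):

```python
# Hypothetical episode records; the field names are illustrative only and
# may not match GUI-Odyssey's actual schema.
episodes = [
    {"episode_id": "ep_001", "apps": ["Gmail", "Calendar"], "steps": 12},
    {"episode_id": "ep_002", "apps": ["Chrome", "Maps"], "steps": 8},
    {"episode_id": "ep_003", "apps": ["Calendar", "Gmail"], "steps": 15},
]

def filter_by_app_combo(episodes, combo):
    """Return episodes whose set of apps matches `combo`, order-insensitive."""
    target = set(combo)
    return [ep for ep in episodes if set(ep["apps"]) == target]

matched = filter_by_app_combo(episodes, ["Gmail", "Calendar"])
print([ep["episode_id"] for ep in matched])  # ['ep_001', 'ep_003']
```

Matching on the *set* of apps treats an episode that goes Gmail→Calendar the same as one that goes Calendar→Gmail, which is the natural notion of an "app combo".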
Alternatives and similar repositories for GUI-Odyssey
Users interested in GUI-Odyssey are comparing it to the libraries listed below.
- ☆31 · Updated Sep 27, 2024
- GUICourse: From General Vision Language Models to Versatile GUI Agents · ☆136 · Updated Jul 17, 2024
- (ICLR 2025) The Official Code Repository for GUI-World · ☆68 · Updated Dec 18, 2024
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" · ☆28 · Updated Jul 31, 2024
- The model, data, and code for the visual GUI agent SeeClick · ☆467 · Updated Jul 13, 2025
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) · ☆99 · Updated Oct 14, 2024
- ☆12 · Updated Aug 8, 2024
- A Universal Platform for Training and Evaluation of Mobile Interaction · ☆60 · Updated Sep 24, 2025
- ☆301 · Updated Aug 18, 2025
- ☆44 · Updated Apr 11, 2024
- ☆35 · Updated Sep 30, 2024
- ☆44 · Updated Mar 19, 2024
- This repository contains the open-source version of the datasets that were used for different parts of training and testing of models that grou… · ☆34 · Updated Aug 20, 2020
- Official repo for the paper "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning" · ☆387 · Updated Feb 22, 2025
- ✨✨ Latest Papers and Datasets on Mobile and PC GUI Agents · ☆150 · Updated Nov 29, 2024
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) · ☆255 · Updated Jul 16, 2024
- MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding · ☆77 · Updated Feb 27, 2025
- Benchmarking Mobile Device Control Agents across Diverse Configurations (ICLR 2024 workshop GenAI4DM spotlight presentation; CoLLAs 2025) · ☆35 · Updated Jul 21, 2025
- ☆20 · Updated Apr 24, 2024
- Building a comprehensive and handy list of papers for GUI agents · ☆636 · Updated Oct 27, 2025
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis · ☆179 · Updated Oct 8, 2025
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… · ☆84 · Updated Mar 7, 2024
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents · ☆1,115 · Updated Aug 17, 2025
- Recognize graphical user interface layout by grouping GUI elements according to their visual attributes · ☆49 · Updated Jun 17, 2022
- AndroidWorld is an environment and benchmark for autonomous agents · ☆635 · Updated Feb 20, 2026
- [AAAI-2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" · ☆146 · Updated Nov 24, 2025
- ScreenAgent: A Computer Control Agent Driven by Visual Language Large Model (IJCAI-24) · ☆567 · Updated Nov 25, 2024
- [CVPR 2025] GUI-Xplore: Empowering Generalizable GUI Agents with One Exploration · ☆20 · Updated Mar 21, 2025
- LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation · ☆67 · Updated Aug 9, 2024
- ClickAgent: Enhancing UI Location Capabilities of Autonomous Agents · ☆28 · Updated Oct 28, 2024
- GPT-4V in Wonderland: LMMs as Smartphone Agents · ☆135 · Updated Jul 17, 2024
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments · ☆61 · Updated Aug 19, 2024
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost · ☆109 · Updated Jul 17, 2025
- ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World · ☆24 · Updated Jun 17, 2025
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… · ☆53 · Updated Jan 14, 2022
- [NeurIPS 2025] UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents · ☆53 · Updated Nov 27, 2025
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL · ☆510 · Updated Jun 6, 2025
- DroidAgent: Intent-Driven Mobile GUI Testing with Autonomous LLM Agents · ☆58 · Updated Mar 12, 2024
- Towards Large Multimodal Models as Visual Foundation Agents · ☆256 · Updated Apr 24, 2025