(ICLR 2025) The Official Code Repository for GUI-World.
☆68 · Dec 18, 2024 · Updated last year
Alternatives and similar repositories for GUI-World
Users interested in GUI-World are comparing it to the libraries listed below.
- GUICourse: From General Vision Language Models to Versatile GUI Agents · ☆136 · Updated this week
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… · ☆147 · Jan 3, 2026 · Updated last month
- ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World · ☆24 · Jun 17, 2025 · Updated 8 months ago
- ☆31 · Sep 27, 2024 · Updated last year
- The model, data, and code for the visual GUI agent SeeClick · ☆467 · Jul 13, 2025 · Updated 7 months ago
- [ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents · ☆228 · Jun 16, 2025 · Updated 8 months ago
- ☆12 · Aug 8, 2024 · Updated last year
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents · ☆300 · Jul 18, 2025 · Updated 7 months ago
- (NAACL 2024) Official code repository for Mixset. · ☆27 · Dec 4, 2024 · Updated last year
- ☆35 · Jan 12, 2026 · Updated last month
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) · ☆99 · Oct 14, 2024 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) · ☆255 · Jul 16, 2024 · Updated last year
- ☆35 · Sep 30, 2024 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" · ☆64 · Oct 19, 2024 · Updated last year
- [ICLR 2025 Spotlight] Agent Trajectory Synthesis via Guiding Replay with Web Tutorials · ☆52 · Feb 21, 2025 · Updated last year
- ☆20 · Apr 24, 2024 · Updated last year
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents. · ☆1,115 · Aug 17, 2025 · Updated 6 months ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents · ☆48 · Feb 27, 2025 · Updated last year
- [CVPR 2025] GUI-Xplore: Empowering Generalizable GUI Agents with One Exploration · ☆20 · Mar 21, 2025 · Updated 11 months ago
- ☆118 · Apr 8, 2025 · Updated 10 months ago
- Official GitHub repo for AutoDetect, an automated weakness-detection framework for LLMs. · ☆46 · Jun 25, 2024 · Updated last year
- Official repo for the paper "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning". · ☆387 · Feb 22, 2025 · Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" · ☆53 · Dec 12, 2024 · Updated last year
- A Universal Platform for Training and Evaluation of Mobile Interaction · ☆60 · Sep 24, 2025 · Updated 5 months ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction · ☆381 · Mar 7, 2025 · Updated 11 months ago
- Towards Large Multimodal Models as Visual Foundation Agents · ☆256 · Apr 24, 2025 · Updated 10 months ago
- Web-grounded natural language instructions · ☆18 · Nov 25, 2024 · Updated last year
- Official code repo for the paper "LearnAct: Few-Shot Mobile GUI Agent with a Unified Demonstration Benchmark" · ☆46 · May 16, 2025 · Updated 9 months ago
- ☆301 · Aug 18, 2025 · Updated 6 months ago
- Custom object detection for design-system UI elements using TensorFlow · ☆16 · Jun 20, 2023 · Updated 2 years ago
- UQ: Assessing Language Models on Unsolved Questions · ☆30 · Aug 26, 2025 · Updated 6 months ago
- ☆17 · Mar 30, 2024 · Updated last year
- Building a comprehensive and handy list of papers for GUI agents · ☆641 · Oct 27, 2025 · Updated 4 months ago
- ☆19 · May 19, 2024 · Updated last year
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents · ☆437 · Apr 20, 2025 · Updated 10 months ago
- ☆44 · Apr 11, 2024 · Updated last year
- The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K … · ☆139 · Feb 7, 2025 · Updated last year
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? · ☆136 · Aug 26, 2024 · Updated last year
- ☆18 · Nov 1, 2024 · Updated last year