RUCBM / GUICourse
GUICourse: From General Vision Language Models to Versatile GUI Agents
☆134 · Updated last year
Alternatives and similar repositories for GUICourse
Users interested in GUICourse are comparing it to the libraries listed below:
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e…☆138 · Updated 5 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024)☆96 · Updated last year
- ☆31 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents☆248 · Updated 8 months ago
- (ICLR 2025) The Official Code Repository for GUI-World.☆66 · Updated last year
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents☆291 · Updated 5 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)☆254 · Updated last year
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay☆140 · Updated 7 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis☆173 · Updated 2 months ago
- [AAAI-2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning"☆142 · Updated last month
- [NeurIPS 2025 Spotlight] Scaling Computer-Use Grounding via UI Decomposition and Synthesis☆135 · Updated 2 months ago
- ☆121 · Updated 3 months ago
- A Self-Training Framework for Vision-Language Reasoning☆88 · Updated 11 months ago
- A Universal Platform for Training and Evaluation of Mobile Interaction☆58 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents☆132 · Updated 9 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"☆62 · Updated last year
- ☆20 · Updated last year
- ☆35 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*☆109 · Updated 7 months ago
- The model, data and code for the visual GUI Agent SeeClick☆451 · Updated 5 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of…☆119 · Updated last year
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control☆63 · Updated 11 months ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" (COLM 2024)☆147 · Updated last year
- An Illusion of Progress? Assessing the Current State of Web Agents☆129 · Updated this week
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain☆105 · Updated last year
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities☆126 · Updated 7 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models☆92 · Updated last year
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model for GUI Agents☆210 · Updated 8 months ago
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost☆104 · Updated 5 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs☆137 · Updated 8 months ago