AkimotoAyako / VisionTasker
VisionTasker introduces a two-stage framework that combines vision-based UI understanding with LLM task planning for step-by-step mobile task automation.
☆101 · Updated 6 months ago
Alternatives and similar repositories for VisionTasker
Users interested in VisionTasker are comparing it to the repositories listed below.
- Source code for the paper "Empowering LLM to use Smartphone for Intelligent Task Automation" ☆440 · Updated last year
- Official implementation of AppAgentX: Evolving GUI Agents as Proficient Smartphone Users ☆600 · Updated 9 months ago
- AndroidWorld is an environment and benchmark for autonomous agents ☆611 · Updated this week
- The model, data, and code for the visual GUI agent SeeClick ☆461 · Updated 6 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆98 · Updated last year
- ☆44 · Updated last year
- ☆34 · Updated last year
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆217 · Updated 7 months ago
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆57 · Updated 6 months ago
- "MobileUse: A Hierarchical Reflection-Driven GUI Agent for Autonomous Mobile Operation" ☆131 · Updated last month
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆147 · Updated last month
- Official repository for the paper "Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Schedulin… ☆13 · Updated 6 months ago
- LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation ☆67 · Updated last year
- VisionDroid ☆21 · Updated last year
- ☆34 · Updated 5 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆177 · Updated 4 months ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆60 · Updated 4 months ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆379 · Updated 11 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆297 · Updated 6 months ago
- [AAAI 2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆143 · Updated 2 months ago
- AUITestAgent is the first automatic, natural-language-driven GUI testing tool for mobile apps, capable of fully automating the entire pro… ☆284 · Updated last year
- GUI Grounding for Professional High-Resolution Computer Use ☆326 · Updated last month
- AgentCPM-GUI: An on-device GUI agent for operating Android apps, enhancing reasoning ability with reinforcement fine-tuning for efficient… ☆1,289 · Updated 3 weeks ago
- [NeurIPS'25] GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents ☆376 · Updated 3 months ago
- DroidAgent: Intent-Driven Mobile GUI Testing with Autonomous LLM Agents ☆56 · Updated last year
- ScreenAgent: A Computer Control Agent Driven by Visual Language Large Model (IJCAI-24) ☆565 · Updated last year
- ☆297 · Updated 5 months ago
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents ☆1,098 · Updated 5 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆255 · Updated last year
- ☆23 · Updated last year