AkimotoAyako / VisionTasker
VisionTasker introduces a novel two-stage framework that combines vision-based UI understanding with LLM task planning for step-by-step mobile task automation.
☆95 · Updated 3 months ago
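For orientation when comparing the agents listed below, the description above reduces to a simple loop: a vision stage turns the current screenshot into a structured screen description, and an LLM planning stage picks the next action, repeated step by step until the task completes. The following is a minimal Python sketch of that loop; the names (`parse_screenshot`, `plan_next_step`, `device.execute`) are hypothetical placeholders for illustration, not VisionTasker's actual API.

```python
# Minimal sketch of a two-stage, step-by-step mobile automation loop.
# All helpers here are hypothetical placeholders, not VisionTasker's real API.
from dataclasses import dataclass


@dataclass
class UIElement:
    text: str      # recognized label / OCR text of the widget
    bounds: tuple  # (left, top, right, bottom) in screen pixels


def parse_screenshot(screenshot_png: bytes) -> list[UIElement]:
    """Stage 1: vision-based UI understanding.
    Detect widgets and text on the screenshot and return a structured,
    language-friendly description of the current screen."""
    raise NotImplementedError  # e.g. widget detection + OCR models


def plan_next_step(task: str, elements: list[UIElement], history: list[str]) -> dict:
    """Stage 2: LLM task planning.
    Given the task, the parsed screen, and past actions, ask an LLM for the
    next action, e.g. {"action": "tap", "target": "Settings"}."""
    raise NotImplementedError  # prompt an LLM with a textual screen summary


def run_task(device, task: str, max_steps: int = 20) -> None:
    """Alternate the two stages until the planner declares the task finished."""
    history: list[str] = []
    for _ in range(max_steps):
        elements = parse_screenshot(device.screenshot())  # stage 1
        step = plan_next_step(task, elements, history)    # stage 2
        if step.get("action") == "finish":
            break
        device.execute(step)  # tap / scroll / type on the device
        history.append(str(step))
```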
Alternatives and similar repositories for VisionTasker
Users interested in VisionTasker are comparing it to the repositories listed below.
- Source code for the paper "Empowering LLM to use Smartphone for Intelligent Task Automation" ☆405 · Updated last year
- AndroidWorld is an environment and benchmark for autonomous agents ☆497 · Updated last week
- Official implementation of AppAgentX: Evolving GUI Agents as Proficient Smartphone Users ☆545 · Updated 6 months ago
- The model, data and code for the visual GUI Agent SeeClick ☆435 · Updated 3 months ago
- ☆44 · Updated last year
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆205 · Updated 4 months ago
- ☆31 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆94 · Updated last year
- Official repository of the paper "Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Schedulin… ☆11 · Updated 3 months ago
- "MobileUse: A Hierarchical Reflection-Driven GUI Agent for Autonomous Mobile Operation" ☆105 · Updated last week
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆47 · Updated 3 months ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆140 · Updated 11 months ago
- GUI Grounding for Professional High-Resolution Computer Use ☆275 · Updated last week
- Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆134 · Updated 5 months ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆367 · Updated 8 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆166 · Updated last month
- DroidAgent: Intent-Driven Mobile GUI Testing with Autonomous LLM Agents ☆45 · Updated last year
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents ☆958 · Updated 2 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆284 · Updated 3 months ago
- LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation ☆64 · Updated last year
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆130 · Updated 3 months ago
- [NeurIPS'25] GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents ☆351 · Updated last week
- ☆246 · Updated 2 months ago
- VisionDroid ☆18 · Updated last year
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆55 · Updated last month
- ☆76 · Updated 2 months ago
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆60 · Updated 5 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆241 · Updated 6 months ago
- PC Agent: While You Sleep, AI Works - A Cognitive Journey into Digital World ☆295 · Updated 5 months ago
- AUITestAgent is the first automatic, natural language-driven GUI testing tool for mobile apps, capable of fully automating the entire pro… ☆268 · Updated last year