AkimotoAyako / VisionTasker
VisionTasker introduces a novel two-stage framework that combines vision-based UI understanding with LLM task planning for step-by-step mobile task automation.
☆65 · Updated last month
Alternatives and similar repositories for VisionTasker:
Users interested in VisionTasker are comparing it to the repositories listed below.
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆82 · Updated 5 months ago
- The model, data and code for the visual GUI Agent SeeClick ☆357 · Updated 4 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆101 · Updated 5 months ago
- LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation ☆56 · Updated 8 months ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆44 · Updated last month
- ☆28 · Updated 6 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆142 · Updated 2 weeks ago
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆29 · Updated last month
- AndroidWorld is an environment and benchmark for autonomous agents ☆264 · Updated this week
- Official implementation of AppAgentX: Evolving GUI Agents as Proficient Smartphone Users ☆306 · Updated last month
- ☆69 · Updated this week
- ☆40 · Updated last year
- ☆18 · Updated 6 months ago
- Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆121 · Updated last week
- GUI Grounding for Professional High-Resolution Computer Use ☆172 · Updated last month
- DroidAgent: Intent-Driven Mobile GUI Testing with Autonomous LLM Agents ☆25 · Updated last year
- Source code for the paper "Empowering LLM to use Smartphone for Intelligent Task Automation" ☆341 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆202 · Updated 2 months ago
- ✨✨ Latest Papers and Datasets on Mobile and PC GUI Agents ☆120 · Updated 4 months ago
- ☆210 · Updated 2 weeks ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆107 · Updated 8 months ago
- Code for NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆39 · Updated 5 months ago
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆275 · Updated last month
- (ICLR 2025) The Official Code Repository for GUI-World ☆53 · Updated 3 months ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆28 · Updated 9 months ago
- ☆36 · Updated last year
- ☆13 · Updated 6 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆231 · Updated 8 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆201 · Updated 3 weeks ago