AkimotoAyako / VisionTasker
VisionTasker introduces a two-stage framework that combines vision-based UI understanding with LLM task planning to automate mobile tasks step by step.
☆79 · Updated 4 months ago
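The two-stage loop described above can be sketched as follows. This is a minimal, hypothetical illustration of the vision-then-plan cycle, not VisionTasker's actual API: the function names (`parse_ui`, `plan_next_action`) and the stubbed vision/LLM stages are assumptions standing in for the real model calls.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "tap", "type", "scroll", or "finish"
    target: str = ""   # UI element the action applies to

def parse_ui(screenshot: str) -> list[str]:
    """Stage 1 (vision-based UI understanding): stub that would normally
    run a detector on the screenshot and return labeled UI elements."""
    return ["Settings button", "Wi-Fi toggle"]

def plan_next_action(task: str, elements: list[str], step: int) -> Action:
    """Stage 2 (LLM task planning): stub that would normally prompt an LLM
    with the task and the parsed elements to pick the next action."""
    canned_plan = [
        Action("tap", "Settings button"),
        Action("tap", "Wi-Fi toggle"),
        Action("finish"),
    ]
    return canned_plan[min(step, len(canned_plan) - 1)]

def automate_task(task: str, max_steps: int = 10) -> list[Action]:
    """Run the step-by-step loop: parse the screen, plan, act, repeat."""
    history: list[Action] = []
    for step in range(max_steps):
        elements = parse_ui("screenshot.png")  # would come from the device
        action = plan_next_action(task, elements, step)
        if action.kind == "finish":
            break
        history.append(action)                 # a real agent would execute it here
    return history
```

Each iteration re-parses the screen, so the planner always reasons over the current UI state rather than a stale plan.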
Alternatives and similar repositories for VisionTasker
Users interested in VisionTasker are comparing it to the libraries listed below.
- Source code for the paper "Empowering LLM to use Smartphone for Intelligent Task Automation" ☆364 · Updated last year
- AndroidWorld is an environment and benchmark for autonomous agents ☆342 · Updated this week
- Official implementation of AppAgentX: Evolving GUI Agents as Proficient Smartphone Users ☆444 · Updated 2 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆89 · Updated 8 months ago
- LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation ☆62 · Updated 10 months ago
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆37 · Updated 3 weeks ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆47 · Updated 4 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆141 · Updated this week
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆116 · Updated 7 months ago
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆45 · Updated last month
- The model, data and code for the visual GUI Agent SeeClick ☆391 · Updated 7 months ago
- MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding ☆66 · Updated 4 months ago
- GUI Grounding for Professional High-Resolution Computer Use ☆213 · Updated last month
- ✨✨ Latest Papers and Datasets on Mobile and PC GUI Agent ☆126 · Updated 6 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆167 · Updated this week
- ☆42 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆239 · Updated 11 months ago
- ☆41 · Updated last year
- ☆30 · Updated 8 months ago
- Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆113 · Updated last month
- Code for the NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆43 · Updated 7 months ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆319 · Updated 3 months ago
- LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects ☆84 · Updated last month
- (ICLR 2025) The official code repository for GUI-World ☆60 · Updated 6 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆216 · Updated 2 months ago
- ☆29 · Updated 9 months ago
- ☆217 · Updated last month
- Automating Android apps with ChatGPT-like LLM ☆128 · Updated last year
- AUITestAgent is the first automatic, natural-language-driven GUI testing tool for mobile apps, capable of fully automating the entire pro… ☆238 · Updated 11 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆254 · Updated 3 weeks ago