ZJU-ACES-ISE / ChatUITest
Under construction
☆11 · Updated 8 months ago
Alternatives and similar repositories for ChatUITest
Users interested in ChatUITest are comparing it to the libraries listed below.
- VisionDroid ☆18 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆91 · Updated 11 months ago
- A Lightweight Visual Reasoning Benchmark for Evaluating Large Multimodal Models through Complex Diagrams in Coding Tasks ☆12 · Updated 6 months ago
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆59 · Updated 3 months ago
- ☆22 · Updated 11 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆128 · Updated last month
- ☆31 · Updated 11 months ago
- ☆21 · Updated 4 months ago
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆42 · Updated 2 months ago
- Official code repo for the paper "LearnAct: Few-Shot Mobile GUI Agent with a Unified Demonstration Benchmark" ☆41 · Updated 4 months ago
- ☆30 · Updated 11 months ago
- ☆36 · Updated last year
- LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects ☆115 · Updated 4 months ago
- Code repo for "Harnessing Negative Signals: Reinforcement Distillation from Teacher Data for LLM Reasoning" ☆28 · Updated last month
- CVPR25 ☆24 · Updated 2 months ago
- Owl Eyes: Spotting UI Display Issues via Visual Understanding ☆11 · Updated 5 years ago
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost ☆90 · Updated 2 months ago
- ☆16 · Updated last year
- VisionTasker introduces a novel two-stage framework combining vision-based UI understanding and LLM task planning for mobile task automat… ☆88 · Updated 2 months ago
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆23 · Updated 2 months ago
- Multimodal Large Language Models for Code Generation under Multimodal Scenarios ☆156 · Updated this week
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆184 · Updated 4 months ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆30 · Updated last year
- ☆11 · Updated last year
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆56 · Updated 3 months ago
- Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning ☆48 · Updated last year
- ☆32 · Updated 2 months ago
- Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆127 · Updated 3 months ago
- (ICLR 2025) The Official Code Repository for GUI-World ☆65 · Updated 9 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆192 · Updated 2 months ago