OpenGVLab / GUI-Odyssey
GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes collected from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combinations.
☆90 · Updated 3 months ago
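To get a quick feel for how the episodes are organized, a short exploration script can help. The sketch below is illustrative only: it assumes the dataset has been downloaded locally and that each episode is a JSON file under an `annotations/` directory with `category` and `steps` fields; the directory layout and field names are assumptions, not confirmed by this page.

```python
# Minimal sketch for exploring GUI-Odyssey episode annotations.
# Assumptions (not confirmed by this page): the dataset is downloaded
# locally, each episode is a JSON file under `annotations/`, and each
# record carries a `category` label and a `steps` action trace.
import json
from collections import Counter
from pathlib import Path

ANNOTATION_DIR = Path("GUI-Odyssey/annotations")  # hypothetical layout

task_types = Counter()
total_steps = 0
episodes = list(ANNOTATION_DIR.glob("*.json"))

for path in episodes:
    with path.open(encoding="utf-8") as f:
        episode = json.load(f)
    # `category` and `steps` are assumed field names, for illustration.
    task_types[episode.get("category", "unknown")] += 1
    total_steps += len(episode.get("steps", []))

print(f"{len(episodes)} episodes, {total_steps} steps total")
print("Episodes per task type:", dict(task_types))
```

Under these assumptions, tallying episodes per task type against the six advertised cross-app categories is a quick sanity check on a fresh download.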
Alternatives and similar repositories for GUI-Odyssey:
Users who are interested in GUI-Odyssey are comparing it to the repositories listed below.
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) · ☆76 · Updated 4 months ago
- ☆28 · Updated 5 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents · ☆104 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* · ☆91 · Updated this week
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent · ☆103 · Updated 3 months ago
- A Self-Training Framework for Vision-Language Reasoning · ☆66 · Updated last month
- Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis · ☆108 · Updated last month
- (ICLR 2025) The Official Code Repository for GUI-World · ☆52 · Updated 2 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… · ☆112 · Updated 3 months ago
- Towards Large Multimodal Models as Visual Foundation Agents · ☆190 · Updated 3 weeks ago
- Official repository of MMDU dataset · ☆85 · Updated 5 months ago
- ☆192 · Updated 3 months ago
- The model, data and code for the visual GUI Agent SeeClick · ☆323 · Updated 3 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" · ☆142 · Updated last month
- ☆133 · Updated last year
- ☆31 · Updated 8 months ago
- ☆12 · Updated 6 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents · ☆176 · Updated this week
- A Universal Platform for Training and Evaluation of Mobile Interaction · ☆41 · Updated this week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models · ☆47 · Updated 7 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" · ☆186 · Updated 6 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models · ☆82 · Updated last year
- [CVPR2025] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models · ☆136 · Updated this week
- ☆73 · Updated 11 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback · ☆267 · Updated 5 months ago
- ☆65 · Updated last month
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) · ☆43 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs · ☆40 · Updated 8 months ago
- Official code for Paper "Mantis: Multi-Image Instruction Tuning" [TMLR2024] · ☆202 · Updated this week
- MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models · ☆24 · Updated last month