OpenGVLab / GUI-Odyssey
GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes collected from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combinations.
☆85 · Updated 2 months ago
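For a quick look at the episode data itself, here is a minimal sketch of downloading the annotations and inspecting a single episode. It assumes the dataset is mirrored on the Hugging Face Hub under `OpenGVLab/GUI-Odyssey` and that episode annotations are stored as JSON files; consult the repo's README for the authoritative layout and schema.

```python
# Minimal sketch: download the GUI Odyssey annotations and peek at one episode.
# Assumes the dataset is mirrored on the Hugging Face Hub as "OpenGVLab/GUI-Odyssey"
# and that episode annotations are JSON files; check the repo README for the real layout.
import glob
import json
import os

from huggingface_hub import snapshot_download  # pip install huggingface_hub

local_dir = snapshot_download(
    repo_id="OpenGVLab/GUI-Odyssey",  # assumed Hub repo id
    repo_type="dataset",
)

# Collect every JSON annotation file in the snapshot.
episode_files = sorted(
    glob.glob(os.path.join(local_dir, "**", "*.json"), recursive=True)
)
print(f"Found {len(episode_files)} JSON files under {local_dir}")

# Inspect one episode; the top-level structure depends on the repo's actual schema.
if episode_files:
    with open(episode_files[0]) as f:
        episode = json.load(f)
    if isinstance(episode, dict):
        print("Example episode keys:", list(episode.keys()))
    else:
        print("Top-level JSON type:", type(episode).__name__)
```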
Alternatives and similar repositories for GUI-Odyssey:
Users interested in GUI-Odyssey are comparing it to the repositories listed below
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆69 · Updated 3 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆96 · Updated 6 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆80 · Updated 2 weeks ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆95 · Updated 2 months ago
- Official repository of MMDU dataset ☆82 · Updated 4 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆42 · Updated 6 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago
- Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆86 · Updated this week
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated 2 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆81 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆169 · Updated last month
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆176 · Updated 4 months ago
- ICML'2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆98 · Updated 6 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆132 · Updated last week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆188 · Updated 3 weeks ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆61 · Updated 3 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆72 · Updated 3 months ago
- The model, data and code for the visual GUI Agent SeeClick ☆294 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆61 · Updated last week
- A Survey on Benchmarks of Multimodal Large Language Models ☆83 · Updated 3 weeks ago
- MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models ☆23 · Updated last week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆259 · Updated 4 months ago
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆127 · Updated last month
- GUI Grounding for Professional High-Resolution Computer Use ☆22 · Updated 2 weeks ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆102 · Updated last week