OpenGVLab / GUI-Odyssey
GUI Odyssey is a comprehensive dataset for training and evaluating cross-app GUI navigation agents. It consists of 7,735 episodes recorded on 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combinations.
☆98 · Updated 4 months ago
Alternatives and similar repositories for GUI-Odyssey:
Users interested in GUI-Odyssey are comparing it to the libraries listed below.
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆81 · Updated 5 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆106 · Updated 8 months ago
- ☆28 · Updated 6 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆98 · Updated last month
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆44 · Updated last month
- Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆118 · Updated last week
- Towards Large Multimodal Models as Visual Foundation Agents ☆195 · Updated last month
- A Self-Training Framework for Vision-Language Reasoning ☆73 · Updated 2 months ago
- (ICLR 2025) The Official Code Repository for GUI-World ☆53 · Updated 3 months ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆117 · Updated 4 months ago
- Official repository of the MMDU dataset ☆86 · Updated 6 months ago
- The model, data, and code for the visual GUI agent SeeClick ☆349 · Updated 4 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆52 · Updated 8 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 4 months ago
- ICML'2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆105 · Updated 8 months ago
- ☆32 · Updated 9 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆195 · Updated last week
- ☆206 · Updated last week
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆81 · Updated 9 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆209 · Updated last week
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆197 · Updated 7 months ago
- ☆70 · Updated 2 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆33 · Updated 7 months ago
- ☆17 · Updated 11 months ago
- Paper collections of multi-modal LLMs for Math/STEM/Code ☆84 · Updated 2 weeks ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆102 · Updated 3 months ago
- Official PyTorch implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆65 · Updated 4 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆82 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆41 · Updated 9 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆148 · Updated 2 weeks ago