OpenGVLab / GUI-Odyssey
GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes collected from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combinations.
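As a rough illustration of how such episode-level data might be consumed, here is a minimal Python sketch that walks a local copy of the annotations. The `annotations/` directory layout and the `task`/`steps` field names are assumptions made for illustration, not the repository's documented schema.

```python
import json
from pathlib import Path

# Minimal sketch of iterating over per-episode annotation files.
# NOTE: the "annotations" directory and the "task"/"steps" keys are
# hypothetical placeholders, not GUI-Odyssey's documented schema.
data_dir = Path("GUI-Odyssey/annotations")

for episode_file in sorted(data_dir.glob("*.json")):
    with episode_file.open() as f:
        episode = json.load(f)
    task = episode.get("task", "")    # assumed: natural-language task goal
    steps = episode.get("steps", [])  # assumed: ordered list of UI actions
    print(f"{episode_file.name}: {task!r} ({len(steps)} steps)")
```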
Related projects:
- GUICourse: From General Vision Language Models to Versatile GUI Agents
- Official implementation of the paper "Needle In A Multimodal Haystack"
- Official repository of the MMDU dataset
- Touchstone: Evaluating Vision-Language Models by Language Models
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents.
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models
- Official code for the paper "Mantis: Multi-Image Instruction Tuning"
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?"
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs)
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo
- Towards Large Multimodal Models as Visual Foundation Agents
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge
- Official repo for StableLLAVA
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models"
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, …
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models
- LVBench: An Extreme Long Video Understanding Benchmark
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
- Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models"
- Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning