google-research-datasets / screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from the Rico dataset. It is intended for training and evaluating models capable of screen content understanding via question answering.
☆107 · Updated last month
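For a sense of how the data might be consumed, here is a minimal Python sketch that loads question-answer pairs and pairs them with their Rico screenshots. The file path and field names (`screen_id`, `question`, `answers`) are assumptions for illustration only; consult the repository's README for the actual file layout and schema.

```python
import json
from pathlib import Path

# Hypothetical paths and field names -- the actual files and schema
# are documented in the screen_qa repository's README.
ANNOTATIONS = Path("screen_qa/answers.json")  # assumed annotation file
SCREENSHOTS = Path("rico/combined")           # Rico screenshots, one image per screen ID

def load_examples(path: Path):
    """Yield (screenshot_path, question, answers) triples."""
    with path.open() as f:
        records = json.load(f)
    for rec in records:
        # Assumed fields: a Rico screen identifier, the question text,
        # and a list of ground-truth answers.
        image = SCREENSHOTS / f"{rec['screen_id']}.jpg"
        yield image, rec["question"], rec["answers"]

if __name__ == "__main__":
    for image, question, answers in load_examples(ANNOTATIONS):
        print(image, question, answers)
        break
```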
Alternatives and similar repositories for screen_qa:
Users interested in screen_qa are comparing it to the repositories listed below.
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆63 · Updated last year
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆223 · Updated 8 months ago
- The model, data and code for the visual GUI agent SeeClick ☆336 · Updated 3 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆103 · Updated 8 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆93 · Updated 4 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆189 · Updated this week
- Towards Large Multimodal Models as Visual Foundation Agents ☆192 · Updated last month
- A family of highly capable yet efficient large multimodal models ☆178 · Updated 6 months ago
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆54 · Updated 3 years ago
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆265 · Updated 9 months ago
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆256 · Updated 2 weeks ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆77 · Updated 5 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆290 · Updated 2 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆99 · Updated 2 months ago
- Web2Code: A Large-Scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆75 · Updated 4 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆234 · Updated 2 months ago
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents ☆294 · Updated last month
- Official repo for the paper "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning" ☆330 · Updated 3 weeks ago
- Code & dataset for the paper "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆50 · Updated 4 months ago
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions ☆332 · Updated 2 months ago
- Implementation of the ScreenAI model from the paper "ScreenAI: A Vision-Language Model for UI and Infographics Understanding" ☆328 · Updated last month
- ✨ Latest Papers and Datasets on Mobile and PC GUI Agents ☆115 · Updated 3 months ago
- InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions (AAAI 2024) ☆160 · Updated 9 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆134 · Updated 8 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆95 · Updated 2 months ago
- VisualWebArena is a benchmark for multimodal agents ☆317 · Updated 4 months ago