google-research-datasets / screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from Rico, and is intended for training and evaluating models that understand screen content via question answering.
☆91 · Updated 4 months ago
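Below is a minimal sketch of how the dataset might be consumed for training or evaluation. The file locations, file naming, and field names (`screen_id`, `question`, `answers`) are assumptions for illustration, not the repository's documented schema; consult the screen_qa repository for the actual layout.

```python
import json
from pathlib import Path

# Hypothetical paths; check the screen_qa repository for the real layout.
ANNOTATIONS_FILE = Path("screen_qa/answers.json")  # assumed annotation file
SCREENSHOTS_DIR = Path("rico/combined")            # assumed local Rico screenshot dir

def load_screenqa(annotations_file: Path) -> list[dict]:
    """Load question-answer annotations from a JSON file (assumed format:
    a list of records, each with screen_id, question, and answers fields)."""
    with annotations_file.open() as f:
        return json.load(f)

def pair_with_screenshots(records: list[dict], screenshots_dir: Path):
    """Yield (screenshot_path, question, answers) triples, skipping records
    whose Rico screenshot is not present locally."""
    for rec in records:
        image_path = screenshots_dir / f"{rec['screen_id']}.jpg"  # assumed naming
        if image_path.exists():
            yield image_path, rec["question"], rec["answers"]

if __name__ == "__main__":
    records = load_screenqa(ANNOTATIONS_FILE)
    for image_path, question, answers in pair_with_screenshots(records, SCREENSHOTS_DIR):
        print(image_path, question, answers)
        break  # show one example pair
```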
Related projects
Alternatives and complementary repositories for screen_qa
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆84 · Updated 4 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆358 · Updated this week
- InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions (AAAI 2024) ☆146 · Updated 5 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆49 · Updated 8 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆198 · Updated 4 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆175 · Updated 4 months ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- A family of highly capable yet efficient large multimodal models ☆167 · Updated 3 months ago
- ☆127 · Updated 9 months ago
- Official repo for UGround ☆100 · Updated 2 weeks ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆75 · Updated last month
- Expert Specialized Fine-Tuning ☆148 · Updated 2 months ago
- WebLINX is a benchmark for building web navigation agents with conversational capabilities ☆118 · Updated last month
- The model, data and code for the visual GUI agent SeeClick ☆227 · Updated this week
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆258 · Updated 5 months ago
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents ☆174 · Updated this week
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆50 · Updated 7 months ago
- HPT - Open Multimodal LLMs from HyperGAI ☆312 · Updated 5 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆266 · Updated 2 weeks ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆47 · Updated last month
- ☆64 · Updated 3 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes fr… ☆69 · Updated last week
- ☆116 · Updated 5 months ago
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever ☆69 · Updated 2 months ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆57 · Updated 4 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆102 · Updated last month
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆115 · Updated 2 weeks ago
- MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities ☆69 · Updated last month
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆156 · Updated 8 months ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆186 · Updated 4 months ago