google-research-datasets / screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from the Rico dataset. It is intended for training and evaluating models on screen content understanding via question answering.
☆120 · Updated 5 months ago
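As a rough illustration of how QA-over-screenshot data of this kind is typically consumed, here is a minimal Python sketch that pairs Rico screenshots with ScreenQA-style annotations. The file paths and JSON field names (`screen_id`, `question`, `answers`) are assumptions for illustration only; check the repository's actual file layout and schema before using them.

```python
import json
from pathlib import Path

# Hypothetical paths; the annotation file name and image directory are assumptions.
ANNOTATIONS_FILE = Path("screen_qa_train.json")   # assumed name, not the documented layout
RICO_IMAGE_DIR = Path("rico/combined")            # Rico screenshots, downloaded separately


def load_screenqa(annotations_file: Path, image_dir: Path):
    """Yield (image_path, question, answers) triples from a ScreenQA-style JSON file."""
    with annotations_file.open() as f:
        records = json.load(f)
    for record in records:
        # Field names below are assumptions for illustration.
        image_path = image_dir / f"{record['screen_id']}.jpg"
        yield image_path, record["question"], record.get("answers", [])


if __name__ == "__main__":
    # Print one example to sanity-check the pairing of screenshots and QA pairs.
    for image_path, question, answers in load_screenqa(ANNOTATIONS_FILE, RICO_IMAGE_DIR):
        print(image_path, question, answers)
        break
```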
Alternatives and similar repositories for screen_qa
Users interested in screen_qa are comparing it to the libraries listed below.
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc…☆72 · Updated last year
- GUICourse: From General Vision Language Models to Versatile GUI Agents☆119 · Updated 11 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024)☆242Updated last year
- The model, data and code for the visual GUI Agent SeeClick☆399Updated this week
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr…☆119Updated 8 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents☆262Updated last month
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024)☆91Updated 9 months ago
- GUI Grounding for Professional High-Resolution Computer Use☆228Updated last week
- ☆77Updated 10 months ago
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs"☆54Updated 8 months ago
- [ICML2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction☆332Updated 4 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning☆97Updated 6 months ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents☆133Updated 11 months ago
- ☆210Updated 2 months ago
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations☆87Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents☆221Updated 2 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs☆120Updated 2 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.☆119Updated 10 months ago
- a family of highly capable yet efficient large multimodal models☆185 · Updated 10 months ago
- VisualWebArena is a benchmark for multimodal agents.☆357 · Updated 8 months ago
- ☆221 · Updated 2 months ago
- ☆142 · Updated last year
- A curated list of recent and past chart understanding work based on our IEEE TKDE survey paper: From Pixels to Insights: A Survey on Auto…☆211 · Updated 3 weeks ago
- ☆29 · Updated 9 months ago
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents☆356 · Updated 2 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever.☆92 · Updated last month
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models☆86 · Updated last year
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of …☆58 · Updated 3 years ago
- (ICLR 2025) The Official Code Repository for GUI-World.☆61 · Updated 6 months ago
- ☆18 · Updated last year