google-research-datasets / screen_qa
The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K question-answer pairs collected by human annotators for ~35K screenshots from Rico, and is intended for training and evaluating models for screen content understanding via question answering.
☆97 · Updated 6 months ago
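The annotations in the repository are distributed as JSON files, so iterating over the question-answer pairs can be done with the standard library alone. Below is a minimal sketch; the directory name and the field names (`screen_id`, `question`, `answers`) are illustrative assumptions rather than the repository's documented schema, so check the repo's README for the actual file layout before relying on them.

```python
import json
from pathlib import Path


def iter_qa_pairs(annotation_dir: str):
    """Yield (screen_id, question, answers) tuples from JSON annotation files.

    Assumes one JSON array of records per file; field names are
    illustrative and may differ from the actual ScreenQA schema.
    """
    for path in sorted(Path(annotation_dir).glob("*.json")):
        with path.open(encoding="utf-8") as f:
            records = json.load(f)
        for record in records:
            yield (
                record.get("screen_id"),    # Rico screenshot identifier (assumed field)
                record.get("question"),     # annotator-written question (assumed field)
                record.get("answers", []),  # list of ground-truth answers (assumed field)
            )


if __name__ == "__main__":
    # Hypothetical local checkout path; adjust to wherever the JSON files live.
    for screen_id, question, answers in iter_qa_pairs("screen_qa/answers"):
        print(screen_id, question, answers)
        break  # print a single example
```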
Alternatives and similar repositories for screen_qa:
Users interested in screen_qa are comparing it to the repositories listed below.
- The model, data and code for the visual GUI Agent SeeClick ☆286 · Updated last month
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆213 · Updated 6 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆58 · Updated 10 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆95 · Updated 6 months ago
- UGround: Universal GUI Visual Grounding for GUI Agents ☆138 · Updated this week
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆175 · Updated this week
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents ☆242 · Updated last week
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆130 · Updated 6 months ago
- InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions (AAAI 2024) ☆155 · Updated 7 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI" ☆378 · Updated this week
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes fr… ☆82 · Updated 2 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆120 · Updated 3 months ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆261 · Updated 7 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆215 · Updated 3 weeks ago
- Implementation of the ScreenAI model from the paper "ScreenAI: A Vision-Language Model for UI and Infographics Understanding" ☆315 · Updated 2 months ago
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever ☆78 · Updated 3 weeks ago
- Code & dataset for the paper "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆45 · Updated 2 months ago
- A curated list of recent and past chart understanding work based on our survey paper "From Pixels to Insights: A Survey on Automatic Chart Understanding in the Era of Large Foundation Models" ☆177 · Updated 5 months ago
- VisualWebArena is a benchmark for multimodal agents. ☆273 · Updated 2 months ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆26 · Updated 6 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆160 · Updated 3 weeks ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆93 · Updated 3 weeks ago
- The official repository for Retrieval Augmented Visual Question Answering ☆199 · Updated 3 weeks ago
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆59 · Updated 4 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆110 · Updated 4 months ago