Implementation of the ScreenAI model from the paper: "ScreenAI: A Vision-Language Model for UI and Infographics Understanding"
☆378 · Feb 6, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for ScreenAI
Users that are interested in ScreenAI are comparing it to the libraries listed below
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆84 · Mar 7, 2024 · Updated last year
- The ScreenQA dataset was introduced in the "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots" paper. It contains ~86K … ☆139 · Feb 7, 2025 · Updated last year
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆136 · Jul 17, 2024 · Updated last year
- ScreenAgent: A Computer Control Agent Driven by Visual Language Large Model (IJCAI-24) ☆567 · Nov 25, 2024 · Updated last year
- A pre-labelled dataset for UI element / layout detection ☆67 · Jun 15, 2023 · Updated 2 years ago
- The model, data and code for the visual GUI Agent SeeClick ☆467 · Jul 13, 2025 · Updated 7 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆147 · Jan 3, 2026 · Updated last month
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆99 · Oct 14, 2024 · Updated last year
- Implementation of a Hierarchical Mamba as described in the paper: "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo… ☆15 · Nov 11, 2024 · Updated last year
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Nov 11, 2024 · Updated last year
- Implementation of the model "MC-ViT" from the paper: "Memory Consolidation Enables Long-Context Video Understanding" ☆27 · Jan 17, 2026 · Updated last month
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… ☆826 · Feb 3, 2025 · Updated last year
- Implementation of "the first large-scale multimodal mixture of experts models" from the paper: "Multimodal Contrastive Learning with… ☆36 · Jan 31, 2026 · Updated last month
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆135 · Jul 17, 2024 · Updated last year
- Various agents from all of the top agent frameworks to integrate into swarms! Langchain, Griptape, CrewAI, and more! ☆18 · Dec 22, 2025 · Updated 2 months ago
- A swarm of LLM agents that will help you test, document, and productionize your code! ☆16 · Feb 16, 2026 · Updated last week
- ☆20 · Apr 24, 2024 · Updated last year
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆63 · Jul 27, 2021 · Updated 4 years ago
- One-Click Legal-Swarm-Template ☆16 · Dec 10, 2024 · Updated last year
- VisualWebArena is a benchmark for multimodal agents. ☆436 · Nov 9, 2024 · Updated last year
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Jul 31, 2024 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆64 · Oct 19, 2024 · Updated last year
- Open-sourced, Fast and Context-aware Action Grounding from GUI Instructions for GUI/Computer-use Agents ☆398 · Feb 8, 2025 · Updated last year
- Open Source Generative Process Automation (i.e. Generative RPA). AI-First Process Automation with Large ([Language (LLMs) / Action (LAMs)… ☆1,491 · Feb 18, 2026 · Updated last week
- Train a production-grade GPT in less than 400 lines of code. Better than Karpathy's version and GIGAGPT ☆16 · Feb 6, 2026 · Updated 3 weeks ago
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents ☆435 · Apr 20, 2025 · Updated 10 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆255 · Jul 16, 2024 · Updated last year
- AI agent using GPT-4V(ision) capable of using a mouse/keyboard to interact with web UI ☆1,066 · Dec 9, 2024 · Updated last year
- ☆18 · Jan 3, 2025 · Updated last year
- An accurate GUI element detection approach based on old-fashioned CV algorithms [Upgraded on 5/July/2021] ☆523 · Nov 8, 2023 · Updated 2 years ago
- ☆44 · Mar 19, 2024 · Updated last year
- Automate UI testing + functionality testing with GPT-4 Vision ☆45 · Dec 17, 2023 · Updated 2 years ago
- [ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents ☆228 · Jun 16, 2025 · Updated 8 months ago
- AppAgent: Multimodal Agents as Smartphone Users, an LLM-based multimodal agent framework designed to operate smartphone apps. ☆6,545 · Mar 19, 2025 · Updated 11 months ago
- [ACL2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆48 · Feb 27, 2025 · Updated last year
- A state-of-the-art open visual language model | multimodal pretrained model ☆6,723 · May 29, 2024 · Updated last year
- Mobile-Agent: The Powerful GUI Agent Family ☆7,338 · Updated this week
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents. ☆1,115 · Aug 17, 2025 · Updated 6 months ago