showlab / ShowUI
[CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use.
★1,466 · Updated 3 months ago
Alternatives and similar repositories for ShowUI
Users interested in ShowUI are comparing it to the libraries listed below.
- Out-of-the-box (OOTB) GUI Agent for Windows and macOS · ★1,667 · Updated 3 months ago
- A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents. · ★890 · Updated 3 weeks ago
- An open-sourced end-to-end VLM-based GUI Agent · ★1,049 · Updated 5 months ago
- [ICML 2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction · ★356 · Updated 6 months ago
- Windows Agent Arena (WAA) is a scalable OS platform for testing and benchmarking of multi-modal AI agents. · ★764 · Updated 4 months ago
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents · ★1,802 · Updated 3 months ago
- ScreenAgent: A Computer Control Agent Driven by Visual Language Large Model (IJCAI-24) · ★505 · Updated 9 months ago
- ★939 · Updated 5 months ago
- GUI Grounding for Professional High-Resolution Computer Use · ★251 · Updated last month
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents · ★379 · Updated 4 months ago
- The model, data and code for the visual GUI Agent SeeClick · ★422 · Updated 2 months ago
- Code for "WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models" · ★911 · Updated last year
- GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents · ★331 · Updated last month
- Implementation of the ScreenAI model from the paper: "A Vision-Language Model for UI and Infographics Understanding" · ★364 · Updated last week
- This is a collection of resources for computer-use GUI agents, including videos, blogs, papers, and projects. · ★433 · Updated 3 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ★1,416 · Updated 3 months ago
- Official implementation of AppAgentX: Evolving GUI Agents as Proficient Smartphone Users · ★513 · Updated 5 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning · ★2,059 · Updated last week
- An LLM-based Web Navigating Agent (KDD'24) · ★884 · Updated 11 months ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents · ★276 · Updated last month
- AgentCPM-GUI: An on-device GUI agent for operating Android apps, enhancing reasoning ability with reinforcement fine-tuning for efficient… · ★1,014 · Updated 3 months ago
- AndroidWorld is an environment and benchmark for autonomous agents · ★428 · Updated 2 weeks ago
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… · ★782 · Updated 7 months ago
- ★867 · Updated last week
- BrowserGym, a Gym environment for web task automation · ★878 · Updated this week
- An LLM-based agent that predicts its tasks proactively. · ★417 · Updated 3 weeks ago
- An Open Large Reasoning Model for Real-World Solutions · ★1,515 · Updated 3 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. · ★1,333 · Updated last week
- Atom of Thoughts for Markov LLM Test-Time Scaling · ★585 · Updated 2 months ago
- Codebase for Aria - an Open Multimodal Native MoE · ★1,067 · Updated 7 months ago