XiaoMi / mobilevlm
MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding
☆75 · Updated 9 months ago
Alternatives and similar repositories for mobilevlm
Users interested in mobilevlm are comparing it to the libraries listed below.
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆137 · Updated 4 months ago
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆95 · Updated last year
- Repository for the paper "InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners" ☆61 · Updated 2 weeks ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆140 · Updated last year
- [AAAI-2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆142 · Updated 3 weeks ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆134 · Updated last year
- ☆31 · Updated last year
- The model, data and code for the visual GUI Agent SeeClick ☆445 · Updated 5 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆173 · Updated 2 months ago
- ☆62 · Updated 3 months ago
- ☆261 · Updated 4 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆118 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆97 · Updated last year
- [TMLR] LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects ☆133 · Updated 2 weeks ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆211 · Updated 2 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆365 · Updated 3 months ago
- Official repository of the MMDU dataset ☆98 · Updated last year
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆275 · Updated 6 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning ☆163 · Updated 2 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆432 · Updated 7 months ago
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model for GUI Agents ☆206 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 6 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆299 · Updated last year
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆287 · Updated 5 months ago
- ☆187 · Updated 10 months ago
- ☆35 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆115 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆191 · Updated 7 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆96 · Updated last week
- ☆108 · Updated last month