runamu / monday
[CVPR 2025] Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents
☆32 · Updated 8 months ago
Alternatives and similar repositories for monday
Users interested in monday are comparing it to the libraries listed below.
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆220 · Updated 9 months ago
- ☆110 · Updated last year
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆98 · Updated 8 months ago
- Official repo of "MMBench-GUI: Hierarchical Multi-Platform Evaluation Framework for GUI Agents". It can be used to evaluate a GUI agent w… ☆99 · Updated 5 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆203 · Updated last year
- An RLHF Infrastructure for Vision-Language Models ☆196 · Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆182 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆84 · Updated 3 months ago
- [ICCV 2025] GUIOdyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUIOdyssey consists of 8,834 e… ☆147 · Updated last month
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆123 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆88 · Updated last year
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆60 · Updated 4 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆104 · Updated 4 months ago
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆277 · Updated 6 months ago
- ☆34 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆100 · Updated 2 years ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆233 · Updated 3 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆176 · Updated 4 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆86 · Updated last year
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆148 · Updated 8 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆286 · Updated 8 months ago
- [ICCV 2025 Highlight] Less is More: Empowering GUI Agent with Context-Aware Simplification ☆41 · Updated 3 weeks ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆101 · Updated 4 months ago
- [AAAI-2026] Code for "UI-R1: Enhancing Efficient Action Prediction of GUI Agents by Reinforcement Learning" ☆143 · Updated 2 months ago
- ☆155 · Updated last year
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) ☆84 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆147 · Updated 7 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆151 · Updated 3 months ago
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆145 · Updated last year