CraftJarvis / JarvisVLA
Official Implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse"
☆122 · Updated 5 months ago
Alternatives and similar repositories for JarvisVLA
Users interested in JarvisVLA are comparing it to the repositories listed below.
- Official implementation of paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25) ☆46 · Updated 9 months ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆100 · Updated 7 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆199 · Updated 3 months ago
- [CVPR 2024] The official implementation of MP5 ☆106 · Updated last year
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24, Spotlight) ☆67 · Updated 2 years ago
- [NeurIPS 2024] Official implementation of Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks ☆93 · Updated 7 months ago
- ☆118 · Updated 9 months ago
- Official implementation of "Self-Improving Video Generation" ☆78 · Updated 9 months ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming ☆294 · Updated last year
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆186 · Updated 4 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents ☆260 · Updated 3 months ago
- ☆43 · Updated 4 months ago
- (VillagerAgent, ACL 2024) A graph-based Minecraft multi-agent framework ☆83 · Updated 7 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆194 · Updated 7 months ago
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆206 · Updated 3 weeks ago
- Evaluate Multimodal LLMs as Embodied Agents ☆57 · Updated 11 months ago
- ☆30 · Updated last year
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆40 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- ☆114 · Updated 6 months ago
- Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation ☆102 · Updated 6 months ago
- Virtual Community: An Open World for Humans, Robots, and Society ☆181 · Updated last month
- Dream-VL and Dream-VLA, a diffusion VLM and a diffusion VLA ☆93 · Updated 2 weeks ago
- Official repository of S-Agents: Self-organizing Agents in Open-ended Environment ☆26 · Updated last year
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO ☆77 · Updated 2 months ago
- [EMNLP 2025 Main] AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time ☆89 · Updated 7 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆339 · Updated last year
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 10 months ago
- DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning ☆166 · Updated 2 months ago
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence ☆74 · Updated last week