CraftJarvis / JarvisVLA
Official Implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse"
☆115 · Updated 4 months ago
Alternatives and similar repositories for JarvisVLA
Users interested in JarvisVLA are comparing it to the repositories listed below.
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆99 · Updated 6 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25) ☆46 · Updated 8 months ago
- [CVPR 2024] Official implementation of MP5 ☆106 · Updated last year
- Official implementation of "Self-Improving Video Generation"☆76Updated 8 months ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming. ☆293 · Updated last year
- ☆46 · Updated 2 years ago
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆39 · Updated 2 years ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934☆177Updated 2 months ago
- ☆118Updated 9 months ago
- [NeurIPS 2024] Official Implementation for Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks☆91Updated 6 months ago
- (VillagerAgent, ACL 2024) A graph-based Minecraft multi-agent framework ☆82 · Updated 6 months ago
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24, Spotlight) ☆66 · Updated 2 years ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆190 · Updated 6 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆249 · Updated 2 months ago
- ☆30 · Updated last year
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆200 · Updated last month
- Evaluate Multimodal LLMs as Embodied Agents ☆56 · Updated 10 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆185 · Updated 3 months ago
- Virtual Community: An Open World for Humans, Robots, and Society ☆178 · Updated last week
- ☆112 · Updated 5 months ago
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO ☆76 · Updated last month
- Official repository of S-Agents: Self-organizing Agents in Open-ended Environment ☆26 · Updated last year
- ☆68 · Updated 3 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 9 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆126 · Updated 5 months ago
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos ☆48 · Updated 6 months ago
- Visual Planning: Let's Think Only with Images ☆290 · Updated 7 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆152 · Updated 3 months ago
- This repo contains the code for "MEGA-Bench Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025]☆77Updated 6 months ago