Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
☆79 · May 17, 2025 · Updated 9 months ago
Alternatives and similar repositories for Emma-X
Users interested in Emma-X are comparing it to the libraries listed below.
- Embodied Chain of Thought: a robotic policy that reasons to solve tasks · ☆369 · Apr 5, 2025 · Updated 10 months ago
- ☆14 · Feb 13, 2025 · Updated last year
- ☆16 · Mar 26, 2025 · Updated 11 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ☆468 · Jan 22, 2025 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" · ☆125 · Aug 21, 2025 · Updated 6 months ago
- ☆438 · Nov 29, 2025 · Updated 3 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… · ☆980 · Dec 20, 2025 · Updated 2 months ago
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025 accepted) · ☆18 · Apr 11, 2025 · Updated 10 months ago
- [CVPR 2025 highlight] Generating 6DoF Object Manipulation Trajectories from Action Description in Egocentric Vision · ☆36 · Dec 2, 2025 · Updated 3 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆338 · Oct 3, 2025 · Updated 5 months ago
- [CoRL 2024] Official code for "Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models" · ☆28 · Dec 11, 2024 · Updated last year
- ☆68 · Jan 8, 2025 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ☆159 · Apr 6, 2025 · Updated 10 months ago
- The official repository of "SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World" · ☆27 · Aug 20, 2025 · Updated 6 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning · ☆401 · Nov 8, 2025 · Updated 3 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ☆406 · Oct 30, 2025 · Updated 4 months ago
- Autoregressive Policy for Robot Learning (RA-L 2025) · ☆147 · Mar 25, 2025 · Updated 11 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆1,051 · Sep 9, 2025 · Updated 5 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs · ☆390 · Nov 11, 2025 · Updated 3 months ago
- ☆264 · Mar 17, 2024 · Updated last year
- ☆132 · Apr 25, 2023 · Updated 2 years ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning · ☆1,517 · Mar 15, 2025 · Updated 11 months ago
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models · ☆102 · Aug 22, 2024 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction · ☆115 · Apr 14, 2025 · Updated 10 months ago
- Official repo and evaluation implementation of VSI-Bench · ☆673 · Aug 5, 2025 · Updated 6 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆164 · Oct 1, 2025 · Updated 5 months ago
- ☆56 · Aug 7, 2025 · Updated 6 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model · ☆622 · Oct 29, 2024 · Updated last year
- [NeurIPS 2025] Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" · ☆133 · Nov 4, 2025 · Updated 3 months ago
- iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models · ☆21 · Jan 29, 2025 · Updated last year
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 · ☆36 · Jan 22, 2025 · Updated last year
- ☆89 · Sep 23, 2025 · Updated 5 months ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics · ☆213 · Jul 17, 2025 · Updated 7 months ago
- [ICLR 2025] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy · ☆227 · Mar 29, 2025 · Updated 11 months ago
- NORA: A Small Open-Sourced Generalist Vision-Language-Action Model for Embodied Tasks · ☆207 · Jan 9, 2026 · Updated last month
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) · ☆338 · Jul 23, 2025 · Updated 7 months ago
- Official code for RVT-2 and RVT · ☆398 · Feb 14, 2025 · Updated last year
- Official implementation of GR-MG · ☆93 · Jan 12, 2025 · Updated last year
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" · ☆209 · May 30, 2025 · Updated 9 months ago