declare-lab / Emma-X
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
☆54 · Updated last month
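The repo title names two mechanisms: a grounded chain of thought (reasoning anchored to observed gripper and object positions, emitted before each action) and look-ahead spatial reasoning (a short movement plan toward the next subgoal). Below is a minimal, hypothetical Python sketch of that two-stage output structure; every name in it (`GroundedCoT`, `ActionStep`, `format_prompt`) is an illustrative assumption, not Emma-X's actual API.

```python
# Hypothetical sketch (NOT Emma-X's real interface): the policy first emits an
# explicit, spatially grounded reasoning trace, then a low-level action
# conditioned on that trace.
from dataclasses import dataclass

@dataclass
class GroundedCoT:
    subtask: str                  # active step of the language instruction
    gripper_xy: tuple             # gripper position grounded in image pixels
    plan: list                    # look-ahead movement plan toward the subgoal

@dataclass
class ActionStep:
    cot: GroundedCoT              # reasoning emitted first...
    delta_pose: list              # ...then a 7-DoF action: (dx, dy, dz, droll, dpitch, dyaw, gripper)

def format_prompt(instruction: str, step: ActionStep) -> str:
    """Serialize one (reasoning, action) pair as a training target might look."""
    cot = step.cot
    return (
        f"Instruction: {instruction}\n"
        f"Subtask: {cot.subtask}\n"
        f"Gripper at: {cot.gripper_xy}\n"
        f"Plan: {' -> '.join(cot.plan)}\n"
        f"Action: {step.delta_pose}"
    )

if __name__ == "__main__":
    step = ActionStep(
        cot=GroundedCoT("reach the red block", (212, 148),
                        ["move left", "lower", "close gripper"]),
        delta_pose=[-0.02, 0.0, -0.05, 0.0, 0.0, 0.0, 1.0],
    )
    print(format_prompt("pick up the red block", step))
```

The point of the structure is that the action tokens are decoded after, and therefore conditioned on, the grounded reasoning and the look-ahead plan.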
Alternatives and similar repositories for Emma-X:
Users interested in Emma-X are comparing it to the libraries listed below.
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated 11 months ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆109 · Updated 5 months ago
- Official implementation of GR-MG ☆76 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆95 · Updated this week
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆86 · Updated 7 months ago
- Unified Video Action Model ☆123 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆107 · Updated last week
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆48 · Updated 3 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆95 · Updated 3 months ago
- A simple testbed for robotics manipulation policies ☆79 · Updated 3 weeks ago
- Human Demo Videos to Robot Action Plans ☆46 · Updated 4 months ago
- Official repository of SAM2Act ☆65 · Updated last month
- ManiBox: Enhancing Spatial Grasping Generalization via Scalable Simulation Data Generation ☆41 · Updated 2 weeks ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆103 · Updated 3 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆73 · Updated 8 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆120 · Updated this week
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆80 · Updated 7 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆77 · Updated this week
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆127 · Updated 5 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆84 · Updated last month
- The official repo for the paper "In-Context Imitation Learning via Next-Token Prediction" ☆69 · Updated last week