declare-lab / Emma-X
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
☆73 · Updated 3 months ago
Alternatives and similar repositories for Emma-X
Users interested in Emma-X are comparing it to the repositories listed below.
- 🦾 A Dual-System VLA with System2 Thinking ☆92 · Updated last week
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆138 · Updated 4 months ago
- ☆55 · Updated 8 months ago
- ☆78 · Updated 11 months ago
- ☆55 · Updated 6 months ago
- ICCV2025 ☆114 · Updated this week
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆104 · Updated 6 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆102 · Updated 4 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆51 · Updated last week
- Unified Vision-Language-Action Model ☆181 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆178 · Updated 2 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆135 · Updated last month
- Repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆132 · Updated 8 months ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆158 · Updated this week
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- ☆80 · Updated 3 weeks ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆123 · Updated last month
- ☆64 · Updated 6 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆53 · Updated 6 months ago
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆122 · Updated 3 months ago
- ☆52 · Updated last year
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆32 · Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆135 · Updated 2 weeks ago
- Official code for VLA-OS ☆94 · Updated 2 months ago
- [CoRL 2024] Official repo of "A3VLM: Actionable Articulation-Aware Vision Language Model" ☆117 · Updated 10 months ago
- Official implementation of GR-MG ☆85 · Updated 7 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆223 · Updated 5 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆130 · Updated 10 months ago
- ☆108 · Updated last month
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆79 · Updated 2 months ago