declare-lab / Emma-X
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
☆79 · Updated 7 months ago
Alternatives and similar repositories for Emma-X
Users interested in Emma-X are comparing it to the repositories listed below.
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆115 · Updated 4 months ago
- ☆60 · Updated last year
- ☆63 · Updated 10 months ago
- ☆89 · Updated last year
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆205 · Updated 7 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆154 · Updated 9 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆128 · Updated 4 months ago
- ICCV 2025 ☆145 · Updated 3 weeks ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆88 · Updated 3 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆112 · Updated 8 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆60 · Updated 3 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆148 · Updated last year
- ☆100 · Updated 2 weeks ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆130 · Updated 3 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆112 · Updated 3 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆153 · Updated this week
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆121 · Updated last year
- ☆68 · Updated 10 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆217 · Updated 3 weeks ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆157 · Updated 3 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆87 · Updated 7 months ago
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites ☆97 · Updated last month
- ☆64 · Updated 11 months ago
- Official implementation of GR-MG ☆93 · Updated 11 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆122 · Updated 10 months ago
- ☆56 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆105 · Updated 9 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- Unified Vision-Language-Action Model ☆257 · Updated 2 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆194 · Updated 4 months ago