lmzpai / roboMamba
The repository of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation`.
☆129 · Updated 7 months ago
Alternatives and similar repositories for roboMamba
Users interested in roboMamba are comparing it to the repositories listed below.
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆100 · Updated 5 months ago
- ICCV2025 ☆112 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆135 · Updated 4 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆191 · Updated 4 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆266 · Updated last month
- 🦾 A Dual-System VLA with System2 Thinking ☆84 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆163 · Updated 2 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆310 · Updated last month
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆285 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆230 · Updated 2 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆238 · Updated 2 months ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆138 · Updated this week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆269 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆311 · Updated 2 months ago
- ☆64 · Updated 5 months ago
- ☆55 · Updated 5 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆150 · Updated 9 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆415 · Updated last month
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆139 · Updated last year
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated 11 months ago
- Official code for VLA-OS. ☆78 · Updated last month
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆121 · Updated last month
- Unified Vision-Language-Action Model ☆165 · Updated 2 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆190 · Updated 2 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆341 · Updated 6 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆96 · Updated 3 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆220 · Updated 4 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆284 · Updated 4 months ago
- Reimplementation of GR-1, a generalist policy for robot manipulation. ☆139 · Updated 11 months ago