yueyang130 / DeeR-VLA
Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution"
☆54 · Updated 2 months ago
Alternatives and similar repositories for DeeR-VLA:
Users interested in DeeR-VLA are comparing it to the repositories listed below
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆64 · Updated last month
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆78 · Updated 3 weeks ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆88 · Updated last month
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆65 · Updated last month
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. ☆86 · Updated last week
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆71 · Updated 3 months ago
- LAPA: Latent Action Pretraining from Videos ☆136 · Updated 3 weeks ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆84 · Updated 3 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆124 · Updated 2 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆96 · Updated last week
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆43Updated 8 months ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation(cvpr 2024)☆102Updated 6 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction☆96Updated 5 months ago
- Official implementation of GR-MG☆66Updated this week
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning☆36Updated last week
- A simple testbed for robotics manipulation policies☆72Updated last week
- LLaRA: Large Language and Robotics Assistant☆163Updated 3 months ago
- [CoRL2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`☆102Updated 3 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation.☆87Updated this week
- [IROS24 Oral]ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models☆82Updated 4 months ago
- The official repo for the paper "In-Context Imitation Learning via Next-Token Prediction"☆59Updated 2 months ago