tulerfeng / Awesome-Embodied-Multimodal-LLMs
Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models).
☆121 · Updated last year
Alternatives and similar repositories for Awesome-Embodied-Multimodal-LLMs
Users interested in Awesome-Embodied-Multimodal-LLMs are comparing it to the repositories listed below.
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆155 · Updated 2 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆165 · Updated last week
- InternVLA-M1: A Spatially Grounded Foundation Model for Generalist Robot Policy ☆122 · Updated last week
- [CVPR 2024] The official implementation of MP5 ☆104 · Updated last year
- WorldVLA: Towards Autoregressive Action World Model ☆435 · Updated last month
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆81 · Updated 4 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆309 · Updated 3 weeks ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆172 · Updated last week
- Unified Vision-Language-Action Model ☆198 · Updated 2 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆188 · Updated 2 weeks ago
- Latest Advances on Vision-Language-Action Models. ☆110 · Updated 7 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆289 · Updated this week
- ☆83 · Updated 4 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆190 · Updated 2 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆138 · Updated 9 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆111 · Updated 7 months ago
- ☆54 · Updated last year
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆276 · Updated last week
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning. ☆210 · Updated this week
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆43 · Updated 2 weeks ago
- ☆429 · Updated last year
- ☆51 · Updated 6 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆110 · Updated last month
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆28 · Updated this week
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆83 · Updated last month
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models" ☆61 · Updated 6 months ago
- Nav-R1: Reasoning and Navigation in Embodied Scenes ☆52 · Updated last week
- ☆16 · Updated 2 months ago
- ☆57 · Updated 7 months ago