tulerfeng / Awesome-Embodied-Multimodal-LLMs
Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models).
☆96 · Updated 7 months ago
Alternatives and similar repositories for Awesome-Embodied-Multimodal-LLMs:
Users interested in Awesome-Embodied-Multimodal-LLMs are comparing it to the repositories listed below
- A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆75 · Updated this week
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆208 · Updated 3 weeks ago
- The Official Implementation of RoboMatrix ☆80 · Updated last month
- ☆355 · Updated 9 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆68 · Updated last week
- [CVPR 2024] The code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆160 · Updated 8 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆61 · Updated last week
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆102 · Updated 11 months ago
- Official repo and evaluation implementation of VSI-Bench ☆388 · Updated 3 weeks ago
- [CVPR 2024] The official implementation of MP5 ☆95 · Updated 7 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆83 · Updated 4 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆83 · Updated 2 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆124 · Updated 3 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆117 · Updated 2 months ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ☆413 · Updated last month
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆84 · Updated 2 months ago
- ☆101 · Updated 3 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs ☆120 · Updated last week
- Awesome Papers about World Models in Autonomous Driving ☆76 · Updated 9 months ago
- [RSS 2024] NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation ☆75 · Updated 2 weeks ago
- [ECCV 2024] TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes ☆111 · Updated last month
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" ☆52 · Updated 4 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆98 · Updated last month
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆42 · Updated 3 weeks ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆100 · Updated 6 months ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆190 · Updated last year
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting ☆21 · Updated 2 months ago
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆115 · Updated 5 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆47 · Updated 2 months ago