EmbodiedGPT / EgoCOT_Dataset
☆48 · Updated last year
Alternatives and similar repositories for EgoCOT_Dataset:
Users interested in EgoCOT_Dataset are comparing it to the repositories listed below.
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆59 · Updated 2 months ago
- ☆46 · Updated 4 months ago
- ☆29 · Updated 7 months ago
- ☆69 · Updated 4 months ago
- ☆24 · Updated 10 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆128 · Updated 6 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Official implementation of ReALFRED (ECCV'24) ☆39 · Updated 6 months ago
- ☆24 · Updated last year
- Evaluate Multimodal LLMs as Embodied Agents ☆44 · Updated 2 months ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆91 · Updated 2 years ago
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆55 · Updated 6 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆110 · Updated 2 weeks ago
- Official implementation of CAPEAM (ICCV'23) ☆13 · Updated 4 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆89 · Updated 2 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆81 · Updated last month
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- [ICRA2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- ☆68 · Updated last week
- Latest Advances on Vision-Language-Action Models ☆36 · Updated last month
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆89 · Updated 8 months ago
- Official implementation of Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel ☆24 · Updated 4 months ago
- ☆68 · Updated 7 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆80 · Updated 6 months ago
- Official implementation of GR-MG ☆78 · Updated 3 months ago
- The project repository for the paper EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents: https://arxiv.org/abs… ☆32 · Updated 3 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆112 · Updated 4 months ago
- Code for the ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" — Paper: https://arxiv.org/abs/2310.07968 … ☆27 · Updated 10 months ago
- ☆18 · Updated 10 months ago
- ☆62 · Updated 2 months ago