snumprlab / capeam
Official Implementation of CAPEAM (ICCV'23)
☆13 · Updated 10 months ago
Alternatives and similar repositories for capeam
Users interested in capeam are comparing it to the repositories listed below.
- Prompter for Embodied Instruction Following · ☆18 · Updated last year
- ☆57 · Updated 10 months ago
- Official Implementation of ReALFRED (ECCV'24) · ☆43 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆135 · Updated 2 weeks ago
- ☆80 · Updated last year
- [ICML 2024] RoboMP2: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models · ☆11 · Updated 3 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning · ☆36 · Updated last year
- Data pre-processing and training code on Open-X-Embodiment with PyTorch · ☆11 · Updated 8 months ago
- ☆30 · Updated last year
- Evaluate Multimodal LLMs as Embodied Agents · ☆54 · Updated 8 months ago
- ☆32 · Updated last year
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" · ☆42 · Updated 7 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning · ☆74 · Updated 5 months ago
- ☆54 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" · ☆96 · Updated 5 months ago
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" · ☆58 · Updated last year
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" (paper: https://arxiv.org/abs/2310.07968) · ☆31 · Updated last year
- [ICML 2024] Official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" · ☆80 · Updated 4 months ago
- Official codebase for EmbCLIP · ☆132 · Updated 2 years ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction · ☆106 · Updated 6 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data · ☆46 · Updated last year
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents · ☆194 · Updated 3 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ☆143 · Updated 6 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning · ☆73 · Updated 10 months ago
- ☆33 · Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos · ☆166 · Updated last month
- Efficiently apply modification functions to RLDS/TFDS datasets · ☆34 · Updated last year
- Official Implementation of CL-ALFRED (ICLR'24) · ☆26 · Updated 11 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" · ☆63 · Updated 9 months ago
- ☆44 · Updated 3 years ago