yunyikristy / DualMind
☆17 · Updated last year
Alternatives and similar repositories for DualMind
Users interested in DualMind are comparing it to the repositories listed below.
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 3 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆81 · Updated 3 months ago
- ☆39 · Updated last year
- Official codebase for EmbCLIP ☆130 · Updated 2 years ago
- ☆42 · Updated last year
- ☆45 · Updated last year
- ☆72 · Updated 10 months ago
- ☆47 · Updated last year
- Using advances in generative modeling to learn reward functions from unlabeled videos. ☆134 · Updated last year
- Codebase for HiP ☆90 · Updated last year
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆118 · Updated last year
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- Codebase for PRISE: Learning Temporal Action Abstractions as a Sequence Compression Problem ☆24 · Updated last year
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆27 · Updated 5 months ago
- Hierarchical Universal Language Conditioned Policies ☆75 · Updated last year
- Instruction Following Agents with Multimodal Transformers ☆53 · Updated 2 years ago
- ☆39 · Updated 3 years ago
- Official code for "QueST: Self-Supervised Skill Abstractions for Continuous Control" [NeurIPS 2024] ☆100 · Updated 9 months ago
- ☆29 · Updated last year
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆39 · Updated 5 months ago
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" ☆171 · Updated last year
- ☆207 · Updated last year
- ☆34 · Updated last year
- [NeurIPS 2024] GenRL: Multimodal-foundation world models enable grounding language and video prompts into embodied domains, by turning th… ☆80 · Updated 4 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆226 · Updated last year
- Official implementation of CAPEAM (ICCV'23) ☆13 · Updated 9 months ago
- ☆44 · Updated last year
- [ICCV 2023] Official code repository for ARNOLD benchmark ☆173 · Updated 5 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆138 · Updated 4 months ago