allenai / embodied-clip
Official codebase for EmbCLIP
☆131 · Updated 2 years ago
Alternatives and similar repositories for embodied-clip
Users interested in embodied-clip are comparing it to the libraries listed below.
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer… ☆93 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆123 · Updated 2 years ago
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Updated 2 years ago
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆40 · Updated 2 years ago
- A mini-framework for running AI2-THOR with Docker. ☆37 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 8 months ago
- ☆45 · Updated 3 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆42 · Updated 9 months ago
- Prompter for Embodied Instruction Following ☆18 · Updated 2 years ago
- Masked Visual Pre-training for Robotics ☆243 · Updated 2 years ago
- Masked World Models for Visual Control ☆132 · Updated 2 years ago
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆104 · Updated 3 years ago
- Utility functions when working with Ai2-THOR. Try to do one thing once. ☆55 · Updated 3 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). ☆141 · Updated 2 years ago
- The ProcTHOR-10K Houses Dataset (see the loading sketch after this list) ☆115 · Updated 3 years ago
- PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆87 · Updated last year
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆31 · Updated 4 years ago
- PyTorch implementation of the Hiveformer research paper ☆49 · Updated 2 years ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" (paper: https://arxiv.org/abs/2310.07968) ☆31 · Updated last year
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat simulator. ☆59 · Updated 3 years ago
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" ☆177 · Updated 2 years ago
- Code for reproducing the results of NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆55 · Updated 5 years ago
- Codebase for the Airbert paper ☆47 · Updated 2 years ago
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆201 · Updated 3 years ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" ☆40 · Updated last year
- Hierarchical Universal Language Conditioned Policies ☆76 · Updated last year
- [ICCV 2023] Official code repository for ARNOLD benchmark ☆179 · Updated 10 months ago
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation ☆17 · Updated 3 years ago
- ☆47 · Updated 2 years ago
- Official Implementation of ReALFRED (ECCV'24) ☆44 · Updated last year
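
Several entries above (the ProcTHOR-10K houses, the Ai2-THOR utilities, the Docker mini-framework) build on the AI2-THOR simulator. As a minimal sketch, assuming the `prior` and `ai2thor` packages are installed and that your AI2-THOR build accepts a procedural house dict as the scene, loading a ProcTHOR-10K house and stepping an agent looks roughly like this:

```python
# Minimal sketch: load a ProcTHOR-10K house and step an AI2-THOR agent in it.
# Assumes `pip install prior ai2thor` and a recent ai2thor build that accepts
# a procedural house specification as `scene`.
import prior
from ai2thor.controller import Controller

dataset = prior.load_dataset("procthor-10k")   # splits: train / val / test
house = dataset["train"][0]                    # one procedurally generated house (JSON-like dict)

controller = Controller(scene=house)           # launches the simulator in that house
event = controller.step(action="RotateRight")  # event carries an RGB frame plus metadata
print(event.metadata["lastActionSuccess"])
controller.stop()
```

On first use, `prior.load_dataset` fetches the dataset, so the initial call can take a while; indexing a different split or house selects another scene.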