allenai / embodied-clip
Official codebase for EmbCLIP
☆132 · Updated 2 years ago
Alternatives and similar repositories for embodied-clip
Users interested in embodied-clip are comparing it to the libraries listed below.
- Visual Room Rearrangement ☆123 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆92 · Updated 2 years ago
- ☆45 · Updated 3 years ago
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆128 · Updated 2 years ago
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 6 months ago
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆100 · Updated 3 years ago
- A mini-framework for running AI2-THOR with Docker. ☆38 · Updated last year
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- Masked Visual Pre-training for Robotics ☆242 · Updated 2 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆44 · Updated 7 months ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆31 · Updated 4 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆59 · Updated 3 years ago
- PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆88 · Updated last year
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆41 · Updated last year
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆54 · Updated 3 years ago
- PyTorch implementation of the Hiveformer research paper ☆49 · Updated 2 years ago
- Masked World Models for Visual Control ☆131 · Updated 2 years ago
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" ☆172 · Updated 2 years ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆39 · Updated last year
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). ☆135 · Updated 2 years ago
- Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆31 · Updated last year
- The ProcTHOR-10K Houses Dataset ☆113 · Updated 2 years ago
- [ICCV 2023] Official code repository for ARNOLD benchmark ☆176 · Updated 8 months ago
- Codebase for the Airbert paper ☆46 · Updated 2 years ago
- [ICRA2023] Grounding Language with Visual Affordances over Unstructured Data ☆46 · Updated 2 years ago
- Code for reproducing the results of NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆55 · Updated 4 years ago
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation ☆17 · Updated 3 years ago
- Hierarchical Universal Language Conditioned Policies ☆76 · Updated last year
- ☆45 · Updated 2 years ago
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos ☆69 · Updated last year