allenai / embodied-clip
Official codebase for EmbCLIP
☆126 · Updated 2 years ago
Alternatives and similar repositories for embodied-clip
Users that are interested in embodied-clip are comparing it to the libraries listed below
- Visual Room Rearrangement — ☆118 · Updated last year
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… — ☆90 · Updated 2 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" — ☆93 · Updated 2 months ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab — ☆42 · Updated 3 months ago
- A mini-framework for running AI2-THOR with Docker. — ☆35 · Updated last year
- Masked Visual Pre-training for Robotics — ☆234 · Updated 2 years ago
- Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods" — ☆124 · Updated 2 years ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence — ☆31 · Updated 3 years ago
- ☆44 · Updated 3 years ago
- A Python package for seamless data distribution in AI workflows — ☆22 · Updated last year
- PyTorch implementation of the Hiveformer research paper — ☆48 · Updated 2 years ago
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" — ☆79 · Updated last year
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" — ☆96 · Updated 3 years ago
- Code for "TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors" — ☆38 · Updated last year
- Prompter for Embodied Instruction Following — ☆18 · Updated last year
- Masked World Models for Visual Control — ☆126 · Updated 2 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… — ☆57 · Updated 3 years ago
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" — ☆162 · Updated last year
- Official implementation of "History Aware Multimodal Transformer for Vision-and-Language Navigation" (NeurIPS'21). — ☆123 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data — ☆43 · Updated last year
- The ProcTHOR-10K Houses Dataset — ☆105 · Updated 2 years ago
- Hierarchical Universal Language Conditioned Policies — ☆73 · Updated last year
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… — ☆38 · Updated last year
- [ICCV 2023] Official code repository for the ARNOLD benchmark — ☆169 · Updated 4 months ago
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences". — ☆220 · Updated last year
- ☆49 · Updated last year
- ☆34 · Updated last year
- ☆42 · Updated last year
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" — ☆52 · Updated 4 years ago
- Code for the ICRA'24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … — ☆31 · Updated last year