allenai / ai2thor-rearrangement
Visual Room Rearrangement
★ 123, updated 2 years ago
Alternatives and similar repositories for ai2thor-rearrangement
Users interested in ai2thor-rearrangement are comparing it to the libraries listed below.
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors (★ 40, updated 2 years ago)
- Official codebase for EmbCLIP (★ 131, updated 2 years ago)
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" (★ 98, updated 8 months ago)
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration (★ 104, updated 3 years ago)
- Utility functions when working with Ai2-THOR. Try to do one thing once. (★ 55, updated 3 years ago)
- The ProcTHOR-10K Houses Dataset (★ 115, updated 3 years ago)
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer… (★ 93, updated 2 years ago)
- Masked Visual Pre-training for Robotics (★ 243, updated 2 years ago)
- Code for training embodied agents using imitation learning at scale in Habitat-Lab (★ 42, updated 9 months ago)
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos (★ 69, updated last year)
- (★ 45, updated 3 years ago)
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 20) (★ 38, updated 2 years ago)
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods (★ 127, updated 2 years ago)
- [ICCV 2023] Official code repository for ARNOLD benchmark (★ 179, updated 10 months ago)
- PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" (★ 87, updated last year)
- PyTorch implementation of the Hiveformer research paper (★ 49, updated 2 years ago)
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data (★ 45, updated 2 years ago)
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat simulator… (★ 59, updated 3 years ago)
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" (★ 177, updated 2 years ago)
- Official code for the paper "Housekeep: Tidying Virtual Households using Commonsense Reasoning" published at ECCV 2022 (★ 52, updated 2 years ago)
- A Python Package for Seamless Data Distribution in AI Workflows (★ 25, updated 2 years ago)
- A mini-framework for running AI2-THOR with Docker. (★ 37, updated last year)
- Hierarchical Universal Language Conditioned Policies (★ 76, updated last year)
- Voltron: Language-Driven Representation Learning for Robotics (★ 233, updated 2 years ago)
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (★ 40, updated last year)
- RoboTHOR Challenge (★ 97, updated 4 years ago)
- Teaching robots to respond to open-vocab queries with CLIP and NeRF-like neural fields (★ 179, updated last year)
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" (★ 55, updated 5 years ago)
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos (★ 71, updated last year)
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction (★ 101, updated last year)