google-research-datasets / RxR
Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It contains 126k navigation instructions in English, Hindi and Telugu, and 126k navigation following demonstrations. Both annotation types include dense spatiotemporal alignments between the text and the visual per…
☆158 · Updated 2 years ago
Alternatives and similar repositories for RxR
Users interested in RxR are comparing it to the repositories listed below.
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆182 · Updated 2 years ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆218 · Updated last year
- PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆82 · Updated last year
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆135 · Updated last year
- Code and data of the Fine-Grained R2R dataset proposed in the EMNLP 2021 paper "Sub-Instruction Aware Vision-and-Language Navigation" ☆49 · Updated 3 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21) ☆125 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆121 · Updated last year
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆57 · Updated 3 years ago
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆519 · Updated 7 months ago
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆53 · Updated 4 years ago
- Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS ☆44 · Updated 4 years ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆80 · Updated 2 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆56 · Updated 2 years ago
- Ideas and thoughts about the fascinating Vision-and-Language Navigation task ☆245 · Updated 2 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆43 · Updated 3 months ago
- Codebase for the Airbert paper ☆46 · Updated 2 years ago
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆60 · Updated 7 months ago
- Teaching robots to respond to open-vocab queries with CLIP and NeRF-like neural fields ☆174 · Updated last year
- Official codebase for EmbCLIP ☆129 · Updated 2 years ago
- The ProcTHOR-10K Houses Dataset ☆108 · Updated 2 years ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral) ☆198 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆91 · Updated 2 years ago
- Code and data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N… ☆128 · Updated last year
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22) ☆41 · Updated 2 years ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆139 · Updated last year
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆32 · Updated 3 years ago
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆199 · Updated last year