google-research-datasets / RxR
Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It contains 126k navigation instructions in English, Hindi and Telugu, and 126k navigation following demonstrations. Both annotation types include dense spatiotemporal alignments between the text and the visual per…
☆128 · Updated last year
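Because each RxR guide annotation pairs an instruction with word-level time alignments, a quick way to get oriented in the data is to stream one guide split and tally instructions and time-aligned words per language. The sketch below is illustrative only: the file name `rxr_train_guide.jsonl.gz` and the field names `language`, `timed_instruction`, `word`, `start_time`, and `end_time` are assumptions about the released JSON Lines schema and should be checked against the files and documentation in your download.

```python
import gzip
import json
from collections import Counter

# Hypothetical file name for a locally downloaded guide split; the actual
# name and schema should be checked against the RxR release.
GUIDE_FILE = "rxr_train_guide.jsonl.gz"

language_counts = Counter()
aligned_word_counts = Counter()

with gzip.open(GUIDE_FILE, "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # assumed: one guide annotation per JSON line
        lang = record.get("language", "unknown")
        language_counts[lang] += 1

        # Assumed field: per-word time spans linking the spoken instruction to
        # the annotator's pose trace (the dense spatiotemporal alignment).
        for token in record.get("timed_instruction", []):
            if token.get("start_time") is not None and token.get("end_time") is not None:
                aligned_word_counts[lang] += 1

print("instructions per language:", dict(language_counts))
print("time-aligned words per language:", dict(aligned_word_counts))
```

Streaming the gzipped file in text mode keeps memory flat, which matters when iterating over all 126k annotations.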
Alternatives and similar repositories for RxR:
Users interested in RxR are comparing it to the repositories listed below.
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation · ☆161 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… · ☆90 · Updated last year
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments · ☆119 · Updated last year
- PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" · ☆73 · Updated 7 months ago
- A curated list of research papers in Vision-Language Navigation (VLN) · ☆192 · Updated 9 months ago
- ☆43 · Updated 2 years ago
- Vision-and-Language Navigation in Continuous Environments using Habitat · ☆344 · Updated last month
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation · ☆44 · Updated 3 years ago
- Vision and Language Agent Navigation · ☆74 · Updated 4 years ago
- 🔀 Visual Room Rearrangement · ☆106 · Updated last year
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) · ☆52 · Updated 2 years ago
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" · ☆48 · Updated 4 years ago
- Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS. · ☆37 · Updated 4 years ago
- Cooperative Vision-and-Dialog Navigation · ☆68 · Updated 2 years ago
- Code and data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N… · ☆100 · Updated last year
- Ideas and thoughts about the fascinating field of Vision-and-Language Navigation · ☆183 · Updated last year
- ☆46 · Updated 2 years ago
- An open-source framework for research in Embodied AI from AI2. · ☆326 · Updated last month
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). · ☆109 · Updated last year
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… · ☆37 · Updated 7 months ago
- 🐍 A Python Package for Seamless Data Distribution in AI Workflows · ☆21 · Updated last year
- ☆33 · Updated last year
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) · ☆66 · Updated 2 years ago
- Official repository of the ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods · ☆118 · Updated last year
- Large-scale pretraining for the navigation task · ☆89 · Updated last year
- Official codebase for EmbCLIP · ☆117 · Updated last year
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) · ☆31 · Updated 2 years ago
- Codebase for the Airbert paper · ☆43 · Updated last year
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation · ☆117 · Updated last year
- ☆60 · Updated 2 years ago