alexpashevich / E.T.
Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions.
☆93 · Updated 2 years ago
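The description above summarizes the core idea: a single transformer attends jointly over the language instruction and the full episode history of frames and actions. As a rough illustration, the sketch below shows what that can look like in PyTorch. This is not the official E.T. implementation; the class name, the 512-d visual features, the single shared positional embedding, and the next-action readout are all simplifying assumptions (the real model uses pretrained language and visual encoders).

```python
import torch
import torch.nn as nn

class EpisodicEncoderSketch(nn.Module):
    """Hypothetical minimal sketch of the E.T. idea: one transformer
    jointly attends over language tokens and the full episode history
    of visual observations and past actions. Not the official model."""

    def __init__(self, vocab_size=1000, num_actions=12, d_model=256,
                 nhead=4, num_layers=2, max_len=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.action_emb = nn.Embedding(num_actions, d_model)
        # assume visual observations arrive as precomputed 512-d features
        self.visual_proj = nn.Linear(512, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, instr_tokens, frame_feats, past_actions):
        # instr_tokens: (B, L) word ids; frame_feats: (B, T, 512);
        # past_actions: (B, T) action ids for the episode so far
        lang = self.word_emb(instr_tokens)
        vis = self.visual_proj(frame_feats)
        act = self.action_emb(past_actions)
        seq = torch.cat([lang, vis, act], dim=1)  # one multimodal sequence
        pos = torch.arange(seq.size(1), device=seq.device)
        seq = seq + self.pos_emb(pos)             # shared positional encoding
        enc = self.encoder(seq)                   # full cross-modal attention
        # predict the next action from the encoding of the latest frame
        latest_frame = enc[:, lang.size(1) + vis.size(1) - 1]
        return self.action_head(latest_frame)

# usage: batch of 2 episodes, 8 instruction tokens, 5 frames observed so far
model = EpisodicEncoderSketch()
logits = model(torch.randint(0, 1000, (2, 8)),
               torch.randn(2, 5, 512),
               torch.randint(0, 12, (2, 5)))
print(logits.shape)  # torch.Size([2, 12])
```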
Alternatives and similar repositories for E.T.
Users interested in E.T. are comparing it to the repositories listed below
- Official repository of the ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Updated 2 years ago
- Official codebase for EmbCLIP ☆131 · Updated 2 years ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll…" ☆40 · Updated last year
- ☆45 · Updated 3 years ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni…" ☆24 · Updated 4 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆59 · Updated 3 years ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆31 · Updated 4 years ago
- Codebase for the Airbert paper ☆47 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆123 · Updated 2 years ago
- A mini-framework for running AI2-Thor with Docker. ☆37 · Updated last year
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆105 · Updated 3 years ago
- Code and data of the Fine-Grained R2R dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆53 · Updated 4 years ago
- Code for the CVPR 2022 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆13 · Updated 3 years ago
- Cooperative Vision-and-Dialog Navigation ☆72 · Updated 3 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆147 · Updated 2 years ago
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 8 months ago
- PyTorch code of the NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆144 · Updated 4 years ago
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆55 · Updated 3 years ago
- Repository for DialFRED. ☆46 · Updated 2 years ago
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆201 · Updated 3 years ago
- ☆23 · Updated 4 years ago
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆142 · Updated last year
- A curated list of research papers in Vision-Language Navigation (VLN) ☆232 · Updated last year
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆484 · Updated 8 months ago
- 🐍 A Python Package for Seamless Data Distribution in AI Workflows ☆25 · Updated 2 years ago
- Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation" ☆26 · Updated 4 years ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆36 · Updated 2 years ago
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆30 · Updated 3 years ago