alexpashevich / E.T.
Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions.
☆86 · Updated last year
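The encoding described above — language tokens plus the full episode history of visual observations and actions, jointly attended by one transformer — can be illustrated with a toy sketch. This is not the repository's actual API; all class names, dimensions, and the feature size (512) are illustrative assumptions.

```python
# Hypothetical sketch of an E.T.-style multimodal encoder: instruction
# tokens, per-step visual features, and past actions are embedded,
# concatenated into a single sequence, and encoded jointly by a
# transformer. Names and shapes are illustrative, not the repo's own.
import torch
import torch.nn as nn

class EpisodicTransformerSketch(nn.Module):
    def __init__(self, vocab_size=1000, num_actions=12, d_model=128,
                 nhead=4, num_layers=2, max_len=512, frame_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.action_emb = nn.Embedding(num_actions, d_model)
        self.frame_proj = nn.Linear(frame_dim, d_model)  # e.g. pooled CNN features
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, words, frames, actions):
        # words:   (B, L)            instruction token ids
        # frames:  (B, T, frame_dim) visual features for the episode so far
        # actions: (B, T)            ids of previously taken actions
        x = torch.cat([self.word_emb(words),
                       self.frame_proj(frames),
                       self.action_emb(actions)], dim=1)
        pos = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_emb(pos)          # shared positional embedding
        h = self.encoder(x)                # joint attention over all modalities
        # predict the next action from the last visual-frame position
        last_frame = h[:, words.size(1) + frames.size(1) - 1]
        return self.action_head(last_frame)

model = EpisodicTransformerSketch()
logits = model(torch.randint(0, 1000, (2, 8)),   # 8-token instruction
               torch.randn(2, 5, 512),           # 5 observed frames
               torch.randint(0, 12, (2, 5)))     # 5 past actions
print(tuple(logits.shape))  # (2, 12): one score per action
```

Feeding the whole episode history, rather than only the current frame, is what lets attention resolve instructions that refer back to earlier observations.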
Related projects
Alternatives and complementary repositories for E.T.
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆37 · Updated 4 months ago
- ☆40 · Updated 2 years ago
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆114 · Updated last year
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆42 · Updated 3 years ago
- A mini-framework for running AI2-THOR with Docker. ☆30 · Updated 6 months ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated 3 years ago
- Official codebase for EmbCLIP ☆113 · Updated last year
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆151 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆105 · Updated last year
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). ☆99 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆80 · Updated last year
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆89 · Updated 2 years ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆28 · Updated 2 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆52 · Updated 2 years ago
- Prompter for Embodied Instruction Following ☆17 · Updated 11 months ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆110 · Updated last year
- 🐍 A Python package for seamless data distribution in AI workflows ☆21 · Updated 11 months ago
- Repository for DialFRED. ☆41 · Updated last year
- Codebase for the Airbert paper ☆42 · Updated last year
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆32 · Updated 2 years ago
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆66 · Updated 4 months ago
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated 2 years ago
- Code for the CVPR'22 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆12 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- Cooperative Vision-and-Dialog Navigation ☆66 · Updated last year
- PyTorch code for the ACL 2020 paper "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆40 · Updated 2 years ago
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆42 · Updated 2 years ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral). ☆113 · Updated last year
- ☆22 · Updated 2 years ago
- Implementation (R2R part) for the paper "Iterative Vision-and-Language Navigation" ☆13 · Updated 7 months ago