alexpashevich / E.T.
Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions.
☆93 · Updated Jul 11, 2023
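As a rough illustration of the idea (not the authors' code), a multimodal transformer of this kind embeds instruction tokens, visual frames, and past actions into a shared space, concatenates them into one episode-long sequence, and applies self-attention over it, so each step can attend to any word or any earlier observation. A minimal single-head NumPy sketch, with all dimensions and names hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, d_k, seed=0):
    # Single-head scaled dot-product attention over the full sequence.
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((x.shape[-1], d_k)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    weights = softmax(q @ k.T / np.sqrt(d_k))  # (seq, seq) attention map
    return weights @ v

# Hypothetical episode: 8 language tokens, 5 frames, 5 past actions,
# each already embedded into a shared 16-dim space.
d = 16
lang    = np.random.default_rng(1).standard_normal((8, d))
frames  = np.random.default_rng(2).standard_normal((5, d))
actions = np.random.default_rng(3).standard_normal((5, d))

# E.T.-style fusion: one sequence over the whole episode history,
# so the policy is not limited to the most recent observation.
episode = np.concatenate([lang, frames, actions], axis=0)  # (18, d)
fused = self_attention(episode, d_k=d)
print(fused.shape)  # (18, 16)
```

A real implementation would add learned embeddings, multiple heads and layers, and an action-prediction head on top of the fused sequence; the sketch only shows the full-episode attention that distinguishes this design from recurrent agents that see one frame at a time.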
Alternatives and similar repositories for E.T.
Users interested in E.T. are comparing it to the repositories listed below.
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated Jun 28, 2021
- Official repository of the ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Updated Apr 9, 2023
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆487 · Updated Feb 5, 2026
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated Apr 14, 2022
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆40 · Updated Jun 21, 2024
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆142 · Updated May 6, 2024
- Repository for DialFRED. ☆46 · Updated Sep 14, 2023
- Code for the EMNLP 2022 paper DANLI: Deliberative Agent for Following Natural Language Instructions ☆18 · Updated May 1, 2025
- ☆45 · Updated Jun 24, 2022
- Code for EmBERT, a transformer model for embodied, language-guided visual task completion. ☆59 · Updated Apr 10, 2024
- Codebase for the Airbert paper ☆47 · Updated Mar 20, 2023
- Prompter for Embodied Instruction Following ☆18 · Updated Nov 30, 2023
- Learning about objects and their properties by interacting with them ☆12 · Updated Oct 21, 2020
- A mini-framework for running AI2-THOR with Docker. ☆37 · Updated Apr 26, 2024
- ☆33 · Updated Sep 22, 2024
- Code for the ECCV 2020 paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" ☆59 · Updated Oct 7, 2022
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆278 · Updated May 16, 2022
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆40 · Updated Nov 21, 2023
- Code for the CVPR 2021 oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆201 · Updated Aug 13, 2022
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21) ☆143 · Updated Jun 14, 2023
- Code for the CVPR 2022 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆13 · Updated Jul 27, 2022
- SNARE Dataset with MATCH and LaGOR models ☆23 · Updated Mar 27, 2024
- Code for the paper "Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation" (TPAMI 2021) ☆10 · Updated Jul 15, 2022
- 🔀 Visual Room Rearrangement ☆125 · Updated Aug 15, 2023
- Official PyTorch implementation for the NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigati… ☆33 · Updated Apr 23, 2023
- Official codebase for EmbCLIP ☆131 · Updated Jun 16, 2023
- The ProcTHOR-10K Houses Dataset ☆118 · Updated Dec 14, 2022
- An open-source framework for research in Embodied AI from AI2. ☆376 · Updated Aug 22, 2025
- Python implementation of the paper Learning Hierarchical Relationships for Object-Goal Navigation ☆48 · Updated Dec 8, 2022
- PyTorch code for the ICLR 2019 paper "Self-Monitoring Navigation Agent via Auxiliary Progress Estimation" ☆122 · Updated Oct 3, 2023
- Official implementation of the *Silver-Bullet-3D* solution for the SAPIEN ManiSkill Challenge 2021 ☆20 · Updated Jan 19, 2022
- ☆17 · Updated Mar 26, 2021
- Code for the CVPR 2022 paper "ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts" ☆10 · Updated Jul 17, 2022
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆30 · Updated Aug 2, 2022
- A curated list of research papers in Vision-and-Language Navigation (VLN) ☆235 · Updated Apr 17, 2024
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning ☆640 · Updated this week
- ☆61 · Updated Jul 25, 2023
- ReaSCAN is a synthetic navigation task that requires models to reason about surroundings over syntactically difficult languages. (NeurIPS… ☆19 · Updated Nov 28, 2021
- A curated list for vision-and-language navigation. ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future… ☆591 · Updated May 2, 2024