alexa/teach
TEACh is a dataset of human–human interactive dialogues for completing tasks in a simulated household environment.
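Each TEACh dialogue pairs a Commander, who knows the task, with a Driver, who acts in the environment. As a rough illustration of working with such data, here is a minimal sketch of parsing a dialogue instance stored as JSON. The file name, the `dialog_history` field, and the sample utterances are assumptions for illustration only, not the dataset's actual schema:

```python
import json
from pathlib import Path

# Hypothetical TEACh-style dialogue instance; real field names may differ.
sample = {
    "instance_id": "demo_0",
    "dialog_history": [
        ["Commander", "Please make a cup of coffee."],
        ["Driver", "Where is the mug?"],
        ["Commander", "The mug is in the sink."],
    ],
}

# Write the sample to disk so the sketch is self-contained.
path = Path("demo_instance.json")
path.write_text(json.dumps(sample))

def load_utterances(json_path):
    """Return (speaker, utterance) pairs from a dialogue instance file."""
    data = json.loads(Path(json_path).read_text())
    return [(speaker, utt) for speaker, utt in data.get("dialog_history", [])]

for speaker, utterance in load_utterances(path):
    print(f"{speaker}: {utterance}")
```

Consult the repository's data loaders for the real instance format; this sketch only shows the general speaker/utterance structure such dialogue data tends to take.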
☆132 · Updated 4 months ago
Related projects:
- ☆23 · Updated last year
- Official repository of the ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆113 · Updated last year
- Repository for DialFRED. ☆40 · Updated last year
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆83 · Updated last year
- ☆39 · Updated 2 years ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated 3 years ago
- Pre-Trained Language Models for Interactive Decision-Making [NeurIPS 2022] ☆116 · Updated 2 years ago
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆241 · Updated 2 years ago
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [ACL 2021] ☆54 · Updated 2 years ago
- Grounded SCAN dataset. ☆69 · Updated 2 years ago
- A mini-framework for running AI2-THOR with Docker. ☆29 · Updated 4 months ago
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆83 · Updated 2 years ago
- Code for the EMNLP 2022 paper DANLI: Deliberative Agent for Following Natural Language Instructions ☆18 · Updated last year
- 🐍 A Python package for seamless data distribution in AI workflows ☆19 · Updated 9 months ago
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆113 · Updated last year
- Code and data for "Inferring Rewards from Language in Context" [ACL 2022]. ☆15 · Updated 2 years ago
- Code and models for MOCA (Modular Object-Centric Approach), proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆37 · Updated 2 months ago
- Prompter for Embodied Instruction Following ☆15 · Updated 9 months ago
- 🔀 Visual Room Rearrangement ☆104 · Updated last year
- Code for EmBERT, a transformer model for embodied, language-guided visual task completion. ☆57 · Updated 5 months ago
- ☆101 · Updated last week
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆27 · Updated 2 years ago
- Vision and Language Agent Navigation ☆71 · Updated 3 years ago
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆362 · Updated last month
- 🚀 Run AI2-THOR with Google Colab ☆19 · Updated 2 years ago
- Cornell Instruction Following Framework ☆33 · Updated 2 years ago
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆42 · Updated 2 years ago
- ☆102 · Updated 2 months ago
- Official codebase for EmbCLIP ☆111 · Updated last year
- Instruction Following Agents with Multimodal Transformers ☆50 · Updated last year