alexa / teach
TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment.
☆138 · Updated 10 months ago
Alternatives and similar repositories for teach:
Users interested in teach are comparing it to the repositories listed below.
- ☆24 · Updated 2 years ago
- Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods" ☆118 · Updated last year
- Repository for DialFRED. ☆42 · Updated last year
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90 · Updated last year
- Pre-Trained Language Models for Interactive Decision-Making [NeurIPS 2022] ☆122 · Updated 2 years ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated 3 years ago
- ☆44 · Updated 2 years ago
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" ☆94 · Updated 2 years ago
- A mini-framework for running AI2-THOR with Docker. ☆33 · Updated 11 months ago
- Code for EmBERT, a transformer model for embodied, language-guided visual task completion. ☆57 · Updated 11 months ago
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- 🐍 A Python Package for Seamless Data Distribution in AI Workflows ☆21 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆90 · Updated 2 years ago
- Code for the EMNLP 2022 paper "DANLI: Deliberative Agent for Following Natural Language Instructions" ☆19 · Updated last year
- Cornell Instruction Following Framework ☆34 · Updated 3 years ago
- 🔀 Visual Room Rearrangement ☆112 · Updated last year
- ☆125 · Updated 8 months ago
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆271 · Updated 2 years ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆37 · Updated 9 months ago
- Vision and Language Agent Navigation ☆76 · Updated 4 years ago
- ☆105 · Updated 5 months ago
- Official codebase for EmbCLIP ☆120 · Updated last year
- PyTorch code for the ACL 2020 paper "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆42 · Updated 2 years ago
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [ACL 2021] ☆54 · Updated 3 years ago
- Code and data for "Inferring Rewards from Language in Context" [ACL 2022]. ☆15 · Updated 2 years ago
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated 2 years ago
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆45 · Updated 2 years ago
- Official code for the EMNLP 2021 Outstanding Paper "MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks" ☆22 · Updated last year
- ☆17 · Updated 2 years ago
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆411 · Updated 8 months ago