GLAMOR-USC / teach_tatc
☆26, updated 2 years ago
Alternatives and similar repositories for teach_tatc
Users interested in teach_tatc are comparing it to the repositories listed below.
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆140, updated last year
- Repository for DialFRED. ☆43, updated last year
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90, updated 2 years ago
- Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods". ☆124, updated 2 years ago
- Official code for the ACL 2021 Findings paper by Yichi Zhang and Joyce Chai, "Hierarchical Task Learning from Language Instructions with Uni…". ☆24, updated 4 years ago
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents". ☆274, updated 3 years ago
- A mini-framework for running AI2-THOR with Docker. ☆35, updated last year
- 🐍 A Python package for seamless data distribution in AI workflows. ☆22, updated last year
- ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. ☆435, updated 2 months ago
- Code and models for MOCA (Modular Object-Centric Approach), proposed in "Factorizing Perception and Policy for Interactive Instruction Foll…". ☆38, updated last year
- Pre-Trained Language Models for Interactive Decision-Making [NeurIPS 2022]. ☆127, updated 3 years ago
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration". ☆96, updated 3 years ago
- Official codebase for EmbCLIP. ☆126, updated 2 years ago
- ☆44, updated 3 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation". ☆93, updated 2 months ago
- 🔀 Visual Room Rearrangement. ☆118, updated last year
- 3D household task-based dataset created using customised AI2-THOR. ☆14, updated 3 years ago
- Cooperative Vision-and-Dialog Navigation. ☆71, updated 2 years ago
- Code for the ECCV 2020 paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web". ☆56, updated 2 years ago
- Code for the CVPR 2022 paper "One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones". ☆13, updated 2 years ago
- ☆131, updated last year
- Prompter for Embodied Instruction Following. ☆18, updated last year
- Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation". ☆25, updated 4 years ago
- [ICCV 2021] Curious Representation Learning for Embodied Intelligence. ☆31, updated 3 years ago
- A curated list of research papers in Vision-and-Language Navigation (VLN). ☆218, updated last year
- Vision and Language Agent Navigation. ☆80, updated 4 years ago
- Official code for the paper "Housekeep: Tidying Virtual Households using Commonsense Reasoning", published at ECCV 2022. ☆51, updated 2 years ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos". ☆33, updated 2 years ago
- Cornell Instruction Following Framework. ☆34, updated 3 years ago
- Code for the EMNLP 2022 paper "DANLI: Deliberative Agent for Following Natural Language Instructions". ☆19, updated 2 months ago