soyeonm / FILM
Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods"
☆122 · Updated 2 years ago
Alternatives and similar repositories for FILM:
Users interested in FILM are comparing it to the repositories listed below.
- ☆44 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer. ☆90 · Updated last year
- Utility functions when working with AI2-THOR. Try to do one thing once. ☆45 · Updated 2 years ago
- Official codebase for EmbCLIP ☆122 · Updated last year
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆91 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆113 · Updated last year
- Repository for DialFRED. ☆42 · Updated last year
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆138 · Updated 11 months ago
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆414 · Updated 9 months ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" (paper: https://arxiv.org/abs/2310.07968) ☆27 · Updated 10 months ago
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" ☆97 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- A mini-framework for running AI2-THOR with Docker. ☆33 · Updated 11 months ago
- Official implementation of ReALFRED (ECCV'24) ☆39 · Updated 6 months ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆40 · Updated last week
- Official code for the ACL 2021 Findings paper "Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring" by Yichi Zhang and Joyce Chai ☆24 · Updated 3 years ago
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆78 · Updated 9 months ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" ☆37 · Updated 10 months ago
- Voltron: Language-Driven Representation Learning for Robotics ☆220 · Updated last year
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆49 · Updated 4 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆123 · Updated last year
- Code for the CVPR 2022 paper "One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones" ☆13 · Updated 2 years ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆31 · Updated 3 years ago
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆273 · Updated 2 years ago
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆66 · Updated last week
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to the iGibson and BDDL repos ☆61 · Updated last year
- Masked Visual Pre-training for Robotics ☆230 · Updated 2 years ago
- Codebase for the Airbert paper ☆45 · Updated 2 years ago
- Code for the CVPR 2021 Oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆173 · Updated 2 years ago