Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods
☆127 · Updated Apr 9, 2023
Alternatives and similar repositories for FILM

Users interested in FILM are comparing it to the repositories listed below.
- Prompter for Embodied Instruction Following — ☆18 · Updated Nov 30, 2023
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni…" — ☆24 · Updated Jun 28, 2021
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… — ☆93 · Updated Jul 11, 2023
- ☆45 · Updated Jun 24, 2022
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks — ☆487 · Updated Feb 5, 2026
- Official implementation of CAPEAM (ICCV'23) — ☆16 · Updated Nov 30, 2024
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. — ☆143 · Updated May 6, 2024
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll…" — ☆40 · Updated Jun 21, 2024
- Repository for DialFRED. — ☆45 · Updated Sep 14, 2023
- ☆26 · Updated Oct 28, 2022
- Code for the EMNLP 2022 paper "DANLI: Deliberative Agent for Following Natural Language Instructions" — ☆18 · Updated May 1, 2025
- Code for the CVPR 2022 paper "One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones" — ☆13 · Updated Jul 27, 2022
- 3D household task-based dataset created using customised AI2-THOR. — ☆14 · Updated Apr 14, 2022
- [ICML 2024] RoboMP2: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models — ☆12 · Updated Jun 30, 2025
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" — ☆278 · Updated May 16, 2022
- Official codebase for EmbCLIP — ☆130 · Updated Jun 16, 2023
- Official implementation of ReALFRED (ECCV'24) — ☆44 · Updated Oct 11, 2024
- A curated list for vision-and-language navigation, accompanying the ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future…" — ☆592 · Updated May 2, 2024
- Code for EmBERT, a transformer model for embodied, language-guided visual task completion. — ☆60 · Updated Apr 10, 2024
- Official implementation of "Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation" (CVPR'22 Oral). — ☆255 · Updated Jun 27, 2023
- The ProcTHOR-10K Houses Dataset — ☆119 · Updated Dec 14, 2022
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" — ☆88 · Updated Jun 27, 2024
- Modular and simple vision-language navigation framework — ☆12 · Updated Aug 16, 2021
- Masked Visual Pre-training for Robotics — ☆245 · Updated Apr 1, 2023
- Code for "TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors" — ☆40 · Updated Nov 21, 2023
- ☆17 · Updated Mar 26, 2021
- ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm — ☆97 · Updated Feb 7, 2023
- Official implementation of "History Aware Multimodal Transformer for Vision-and-Language Navigation" (NeurIPS'21). — ☆143 · Updated Jun 14, 2023
- 🔀 Visual Room Rearrangement — ☆126 · Updated Aug 15, 2023
- Code for the NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" — ☆98 · Updated May 8, 2025
- An open-source framework for research in Embodied AI from AI2. — ☆378 · Updated Aug 22, 2025
- A visual semantic planner for the ALFRED virtual agent challenge using the GPT-2 language model — ☆16 · Updated Oct 1, 2020
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation — ☆17 · Updated Nov 1, 2022
- ☆61 · Updated Jul 25, 2023
- A mini-framework for running AI2-THOR with Docker. — ☆37 · Updated Apr 26, 2024
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models — ☆216 · Updated Mar 26, 2025
- ☆15 · Updated Aug 9, 2021
- ☆38 · Updated Mar 10, 2022
- Official implementation of the paper "Data-Agnostic Robotic Long-Horizon Manipulation with Vision-Language-Conditioned Closed-Loop Feedback" — ☆18 · Updated Apr 10, 2025