gistvision / moca
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long-horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
☆37 · Updated last year
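The factorization described above — one stream deciding *what to interact with* (an interaction mask) and another deciding *what to do next* (a low-level action) — can be sketched as two decoupled functions whose outputs are only paired at the end. This is an illustrative toy sketch, not the repo's actual API; the function names, action vocabulary, and thresholding "perception" are all assumptions made for clarity.

```python
# Illustrative sketch (NOT the repo's actual API) of MOCA's two-stream
# factorization: perception and policy are computed independently and
# combined only when the agent emits an (action, mask) interaction.

def perception_stream(frame):
    """Stand-in for the visual branch: predicts an interaction mask."""
    # Toy "mask": threshold pixel intensities at 0.5.
    return [[1 if px > 0.5 else 0 for px in row] for row in frame]

def policy_stream(instruction_tokens, step):
    """Stand-in for the policy branch: predicts the next low-level action."""
    # Hypothetical action vocabulary, loosely modeled on AI2-THOR actions.
    actions = ["MoveAhead", "RotateRight", "PickupObject", "Stop"]
    return actions[step % len(actions)]

def agent_step(frame, instruction_tokens, step):
    # The factorization: neither stream conditions on the other's output.
    mask = perception_stream(frame)
    action = policy_stream(instruction_tokens, step)
    return action, mask

frame = [[0.9, 0.2], [0.4, 0.7]]
action, mask = agent_step(frame, ["put", "the", "mug", "in", "the", "sink"], step=0)
print(action)  # MoveAhead
print(mask)    # [[1, 0], [0, 1]]
```

The design point the sketch mirrors is that errors in perception (a wrong mask) cannot corrupt the policy's action choice, and vice versa, which is what the paper means by decoupling the two.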
Alternatives and similar repositories for moca
Users interested in moca are comparing it to the libraries listed below.
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90 · Updated last year
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated 3 years ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated 3 years ago
- SNARE Dataset with MATCH and LaGOR models ☆24 · Updated last year
- 🐍 A Python package for seamless data distribution in AI workflows ☆22 · Updated last year
- Repository of the ECCV 2020 paper "Active Visual Information Gathering for Vision-Language Navigation" ☆44 · Updated 3 years ago
- Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation" ☆25 · Updated 4 years ago
- PyTorch code for the ICLR 2019 paper "Self-Monitoring Navigation Agent via Auxiliary Progress Estimation" ☆122 · Updated last year
- PyTorch code for the ACL 2020 paper "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆42 · Updated 3 years ago
- Code for "Chasing Ghosts: Instruction Following as Bayesian State Tracking", published at NeurIPS 2019 ☆10 · Updated 5 years ago
- Official repository of the NeurIPS 2021 paper PTR ☆33 · Updated 3 years ago
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation ☆15 · Updated 2 years ago
- Visual Grounding of Referring Expressions for Human-Robot Interaction ☆26 · Updated 6 years ago
- ☆24 · Updated 3 years ago
- Code and data for the Fine-Grained R2R dataset proposed in the EMNLP 2021 paper "Sub-Instruction Aware Vision-and-Language Navigation" ☆47 · Updated 3 years ago
- PyTorch code and data for "EnvEdit: Environment Editing for Vision-and-Language Navigation" (CVPR 2022) ☆32 · Updated 2 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆92 · Updated last month
- Code and data for the CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆39 · Updated 3 years ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆79 · Updated 11 months ago
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 2020) ☆37 · Updated last year
- A mini-framework for running AI2-THOR with Docker. ☆35 · Updated last year
- Official codebase for EmbCLIP ☆126 · Updated 2 years ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆33 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆117 · Updated last year
- Implementation (R2R part) of the paper "Iterative Vision-and-Language Navigation" ☆15 · Updated last year
- ☆44 · Updated 3 years ago
- 📎 + 🦾 CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision ☆15 · Updated last month
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆56 · Updated 2 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆61 · Updated 5 years ago
- Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods" ☆124 · Updated 2 years ago