gistvision / moca
Code and models for MOCA (Modular Object-Centric Approach), proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). MOCA addresses long-horizon instruction following with a modular architecture that decouples the task into visual perception and action-policy prediction.
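To make the factorization concrete, here is a minimal, hypothetical sketch of an agent split into separate perception and policy streams. This is illustrative only: the class names, methods, and trivial stand-in logic are assumptions for this example, not the repository's actual API or networks.

```python
# Illustrative sketch (NOT the official MOCA code): a factorized agent where
# visual perception and action-policy prediction are independent modules.

class PerceptionModule:
    """Hypothetical stand-in for an interaction-mask prediction network."""
    def predict_mask(self, frame):
        # A real module would segment the target object from pixels;
        # here we just echo a label from the mock observation.
        return {"object": frame.get("target", "unknown"), "mask": [[1]]}

class PolicyModule:
    """Hypothetical stand-in for a seq2seq action-sequence predictor."""
    def predict_action(self, instruction, step):
        plan = instruction.split()  # trivial placeholder for decoding
        return plan[step] if step < len(plan) else "Stop"

class ModularAgent:
    """Composes the two streams, mirroring the perception/policy split."""
    def __init__(self):
        self.perception = PerceptionModule()
        self.policy = PolicyModule()

    def act(self, instruction, frame, step):
        # The policy decides WHAT to do; perception decides WHERE (which object).
        action = self.policy.predict_action(instruction, step)
        mask = self.perception.predict_mask(frame)
        return action, mask

agent = ModularAgent()
action, mask = agent.act("PickupObject Stop", {"target": "apple"}, 0)
```

The point of the design is that either stream can be trained or swapped independently of the other.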
Alternatives and similar repositories for moca:
Users interested in moca are comparing it to the repositories listed below.
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra…
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni…
- SNARE Dataset with MATCH and LaGOR models
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation"
- 3D household task-based dataset created using customised AI2-THOR.
- 🐍 A Python Package for Seamless Data Distribution in AI Workflows
- Official repository of the NeurIPS 2021 paper PTR
- Official implementation of the *Silver-Bullet-3D* solution for the SAPIEN ManiSkill Challenge 2021
- Visual Grounding of Referring Expressions for Human-Robot Interaction
- A mini-framework for running AI2-THOR with Docker.
- Code and data for the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper "Sub-Instruction Aware Vision-and-Language Navigation"
- Code and data for the CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation"
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 2020)
- Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation"
- Repository of the ECCV 2020 paper "Active Visual Information Gathering for Vision-Language Navigation"
- Official codebase for EmbCLIP
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors
- Official repository of the ICLR 2022 paper "FILM: Following Instructions in Language with Modular Methods"
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022)
- [ICCV'21] Curious Representation Learning for Embodied Intelligence
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos"
- Learning about objects and their properties by interacting with them
- Implementation (R2R part) of the paper "Iterative Vision-and-Language Navigation"
- Code for the ACM MM 2022 paper "Target-Driven Structured Transformer Planner for Vision-Language Navigation"
- Codebase for the Airbert paper
- 📎 + 🦾 CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision
- PyTorch code for the ACL 2020 paper "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps"