askforalfred / alfred
ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
☆362 · Updated last month
Related projects:
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆113 · Updated last year
- Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆241 · Updated 2 years ago
- An open source framework for research in Embodied-AI from AI2. ☆314 · Updated last week
- API to run VirtualHome, a Multi-Agent Household Simulator ☆452 · Updated last month
- Voltron: Language-Driven Representation Learning for Robotics ☆197 · Updated last year
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆359 · Updated 2 weeks ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆83 · Updated last year
- Pre-training Reusable Representations for Robotic Manipulation Using Diverse Human Video Data ☆283 · Updated last year
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆261 · Updated 9 months ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open vocabulary visuolinguomotor learning. ☆257 · Updated 2 weeks ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆180 · Updated 5 months ago
- Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆270 · Updated 11 months ago
- 🔀 Visual Room Rearrangement ☆104 · Updated last year
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆132 · Updated 4 months ago
- OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine. Join our Discord for support: https://… ☆443 · Updated this week
- Masked Visual Pre-training for Robotics ☆214 · Updated last year
- CLIPort: What and Where Pathways for Robotic Manipulation ☆448 · Updated 10 months ago
- Utility functions when working with Ai2-THOR. Try to do one thing once. ☆42 · Updated 2 years ago
- Reading list for research topics in embodied vision ☆495 · Updated last month
- Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation ☆342 · Updated 4 months ago
- Official codebase for EmbCLIP ☆111 · Updated last year
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆83 · Updated 2 years ago
- The repository for the largest and most comprehensive empirical study of visual foundation models for Embodied AI (EAI). ☆455 · Updated 4 months ago
- Pre-Trained Language Models for Interactive Decision-Making [NeurIPS 2022] ☆116 · Updated 2 years ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆251 · Updated last week
- ☆23 · Updated last year
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆199 · Updated 3 weeks ago
- ☆39 · Updated 2 years ago
- Ideas and thoughts about the fascinating Vision-and-Language Navigation ☆140 · Updated last year
- ProgPrompt for Virtualhome ☆107 · Updated last year