chengaopro / Awesome-EmbodiedAI
A curated list of awesome Embodied AI works, still under construction. It currently contains Simulators, Tasks, and Datasets.
★30 · Updated 5 years ago
Alternatives and similar repositories for Awesome-EmbodiedAI
Users interested in Awesome-EmbodiedAI are comparing it to the libraries listed below.
- Official codebase for EmbCLIP ★130 · Updated 2 years ago
- Visual Room Rearrangement ★122 · Updated 2 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ★96 · Updated 4 months ago
- A Model for Embodied Adaptive Object Detection ★46 · Updated 3 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ★45 · Updated last year
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 2020) ★37 · Updated 2 years ago
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" ★98 · Updated 3 years ago
- ★13 · Updated 2 years ago
- [ICCV 2023] Official code repository for the ARNOLD benchmark ★174 · Updated 6 months ago
- ★44 · Updated 3 years ago
- Official implementation of the *Silver-Bullet-3D* solution for the SAPIEN ManiSkill Challenge 2021 ★20 · Updated 3 years ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ★44 · Updated last year
- Learning about objects and their properties by interacting with them ★12 · Updated 4 years ago
- ★55 · Updated 9 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ★44 · Updated last year
- ★16 · Updated last year
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll…" ★38 · Updated last year
- ★54 · Updated last year
- General-purpose Visual Understanding Evaluation ★20 · Updated last year
- ★25 · Updated 3 years ago
- Code for "MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World" ★132 · Updated 11 months ago
- Official implementation of CAPEAM (ICCV 2023) ★13 · Updated 9 months ago
- Masked Visual Pre-training for Robotics ★241 · Updated 2 years ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ★154 · Updated 3 weeks ago
- ObjectFolder Dataset ★165 · Updated 3 years ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ★191 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ★71 · Updated last year
- Official PyTorch implementation of "Learning Affordance Grounding from Exocentric Images" (CVPR 2022) ★68 · Updated 10 months ago
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to the iGibson and BDDL repos ★66 · Updated last year
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ★90 · Updated 2 years ago