fkenghagho / RobotVQA
RobotVQA is a project that develops a Deep Learning-based cognitive vision system to support household robots' perception while they perform human-scale daily manipulation tasks, such as cooking in a normal kitchen. The system relies on dense descriptions of objects in the scene and their relationships.
☆17 · Updated 11 months ago
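The dense scene description mentioned above can be pictured as a small scene graph: detected objects with attributes, plus pairwise relations between them. The sketch below is purely illustrative; the class names and fields are assumptions for exposition, not RobotVQA's actual API.

```python
# Hypothetical sketch of a dense scene description: objects with
# attributes and (subject, predicate, object) relation triples.
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    name: str       # unique instance identifier, e.g. "cup1"
    category: str   # object class, e.g. "cup"
    color: str      # a perceived attribute


@dataclass
class SceneGraph:
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (subject, predicate, object)

    def describe(self):
        """Render the scene graph as human-readable statements."""
        lines = [f"{o.color} {o.category} '{o.name}'" for o in self.objects]
        lines += [f"{s} {p} {o}" for s, p, o in self.relations]
        return lines


scene = SceneGraph()
scene.objects.append(SceneObject("cup1", "cup", "red"))
scene.objects.append(SceneObject("table1", "table", "brown"))
scene.relations.append(("cup1", "on top of", "table1"))

for line in scene.describe():
    print(line)
```

A downstream planner could consume such triples directly, e.g. to locate the support surface of an object before grasping it.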
Alternatives and similar repositories for RobotVQA
Users interested in RobotVQA are comparing it to the repositories listed below.
- Instruction Following Agents with Multimodal Transformers ☆53 · Updated 2 years ago
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆38 · Updated last year
- ☆45 · Updated last year
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆23 · Updated last year
- Codebase for HiP ☆90 · Updated last year
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- Task planning over 3D scene graphs ☆16 · Updated 3 years ago
- Code for RRL (https://sites.google.com/view/abstractions4rl) ☆27 · Updated 3 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆94 · Updated 2 months ago
- Voltron Evaluation: Diverse Evaluation Tasks for Robotic Representation Learning ☆36 · Updated 2 years ago
- Codebase for the ICLR 2023 paper "SMART: Self-supervised Multi-task pretrAining with contRol Transformers" ☆53 · Updated last year
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" ☆96 · Updated 3 years ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆79 · Updated last year
- Official implementation of Matcha-agent (https://arxiv.org/abs/2303.08268) ☆27 · Updated 10 months ago
- Transformer training and evaluation code used as part of the OPTIMUS project ☆75 · Updated last year
- Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers ☆59 · Updated 2 years ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, and submodules for the iGibson and BDDL repos ☆66 · Updated last year
- ☆10 · Updated last year
- NSRM: Neuro-Symbolic Robot Manipulation ☆14 · Updated 2 years ago
- ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm ☆94 · Updated 2 years ago
- Simulations used in "Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations" ☆28 · Updated 2 years ago
- Official code for the paper "Housekeep: Tidying Virtual Households using Commonsense Reasoning", published at ECCV 2022 ☆51 · Updated 2 years ago
- Code for "Watch and Match: Supercharging Imitation with Regularized Optimal Transport" ☆80 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆43 · Updated last year
- Public release for "Distillation and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆44 · Updated last year
- Implementation of DeepMind's RoboCat ("Self-Improving Foundation Agent for Robotic Manipulation"), a next-generation robot LLM ☆85 · Updated last year
- (NeurIPS 2022) LISA: Learning Interpretable Skill Abstractions, a framework for unsupervised skill learning using imitation ☆31 · Updated 2 years ago
- MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations ☆87 · Updated 2 years ago