fkenghagho / RobotVQA
RobotVQA is a project that develops a Deep Learning-based Cognitive Vision System to support household robots' perception while they perform human-scale daily manipulation tasks, such as cooking in a normal kitchen. The system relies on dense descriptions of objects in the scene and their relationships.
☆17 · Updated last year
Alternatives and similar repositories for RobotVQA
Users interested in RobotVQA are comparing it to the repositories listed below.
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 3 months ago
- Codebase for HiP ☆90 · Updated last year
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆40 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to the iGibson and BDDL repos ☆66 · Updated last year
- Voltron Evaluation: Diverse Evaluation Tasks for Robotic Representation Learning ☆36 · Updated 2 years ago
- Instruction Following Agents with Multimodal Transformers ☆53 · Updated 2 years ago
- ☆45 · Updated last year
- 🔀 Visual Room Rearrangement ☆121 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated last year
- Code for the paper "Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration" ☆97 · Updated 3 years ago
- MiniGrid implementation of BEHAVIOR tasks ☆49 · Updated last year
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- Official codebase for EmbCLIP ☆129 · Updated 2 years ago
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆23 · Updated last year
- ☆76 · Updated 2 months ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆82 · Updated last year
- ☆44 · Updated last year
- PyTorch implementation of the Hiveformer research paper ☆49 · Updated 2 years ago
- ☆31 · Updated 10 months ago
- ☆44 · Updated 3 years ago
- Official repository for "LIV: Language-Image Representations and Rewards for Robotic Control" (ICML 2023) ☆111 · Updated last year
- Official code for the paper "Housekeep: Tidying Virtual Households using Commonsense Reasoning" (ECCV 2022) ☆52 · Updated 2 years ago
- PyTorch code for the ICRA 2022 paper StructFormer ☆47 · Updated 3 years ago
- Hierarchical Universal Language Conditioned Policies ☆74 · Updated last year
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 ☆26 · Updated 11 months ago
- ☆50 · Updated last year
- General-purpose Visual Understanding Evaluation ☆20 · Updated last year
- Transformer training and evaluation code used as part of the OPTIMUS project ☆80 · Updated last year
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆97 · Updated last year