Ram81 / seeing-unseen
☆16 · Updated last year
Alternatives and similar repositories for seeing-unseen
Users interested in seeing-unseen are comparing it to the libraries listed below.
- ☆45 · Updated last year
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image · ☆46 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) · ☆44 · Updated 2 years ago
- Python package for importing and loading external assets into AI2THOR · ☆27 · Updated last week
- ☆80 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data · ☆46 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction · ☆36 · Updated last month
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) · ☆52 · Updated 4 months ago
- Official PyTorch implementation of Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow · ☆44 · Updated last year
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos · ☆71 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration · ☆56 · Updated 5 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" · ☆94 · Updated last year
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24) · ☆44 · Updated 3 months ago
- Code repository for the Habitat Synthetic Scenes Dataset (HSSD) paper · ☆105 · Updated last year
- Code for Stable Control Representations · ☆26 · Updated 6 months ago
- ☆36 · Updated 3 months ago
- [CoRL 2023 Oral] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields · ☆135 · Updated last year
- ☆58 · Updated 10 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" · ☆44 · Updated last year
- Implementation of the ICCV 2023 paper "DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation" · ☆19 · Updated 2 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… · ☆59 · Updated 3 years ago
- [CoRL 2022] This repository contains code for generating relevancies, training, and evaluating Semantic Abstraction · ☆115 · Updated 2 years ago
- Main augmentation script for a real-world robot dataset · ☆35 · Updated 2 years ago
- ☆77 · Updated 4 months ago
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning · ☆86 · Updated last year
- [CVPR 2025] Official implementation of "GenManip: LLM-driven Simulation for Generalizable Instruction-Following Manipulation" · ☆70 · Updated 2 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" · ☆63 · Updated 9 months ago
- (Incomplete version) An implementation of AffordanceLLM · ☆14 · Updated last year
- Code for "FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks" · ☆75 · Updated 10 months ago
- Code for the RSS 2023 paper "Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement" · ☆20 · Updated 2 years ago