Ram81 / seeing-unseen
☆14 · Updated 11 months ago
Alternatives and similar repositories for seeing-unseen
Users interested in seeing-unseen are comparing it to the libraries listed below.
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last week
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning ☆18 · Updated 4 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆37 · Updated 2 years ago
- ☆42 · Updated last year
- ☆71 · Updated 8 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆30 · Updated 4 months ago
- Implementation of our ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation ☆19 · Updated last year
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆44 · Updated 10 months ago
- Main augmentation script for a real-world robot dataset. ☆35 · Updated last year
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24). ☆38 · Updated 9 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos Enables Diverse Zero-Shot Manipulation" ☆85 · Updated 9 months ago
- Code for the RSS 2023 paper "Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement" ☆19 · Updated last year
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆29 · Updated 7 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆53 · Updated 4 months ago
- [ECCV 2024] 🎉 Official repository of "Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipu…" ☆80 · Updated 5 months ago
- Mirage: a zero-shot cross-embodiment policy transfer method, with benchmarking code for cross-embodiment policy transfer. ☆21 · Updated last year
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty ☆20 · Updated last year
- ☆46 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆62 · Updated 5 months ago
- ☆14 · Updated last month
- Python package for importing and loading external assets into AI2THOR ☆21 · Updated 7 months ago
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆33 · Updated 11 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Official PyTorch implementation of Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow ☆44 · Updated last year
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆77 · Updated 6 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆63 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆78 · Updated 9 months ago
- Code repository for the Habitat Synthetic Scenes Dataset (HSSD) paper. ☆88 · Updated 11 months ago
- Official implementation of the NRNS paper ☆36 · Updated 2 years ago