martius-lab / videosaur
Repository for our paper "Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities"
☆32 · Updated 10 months ago
Alternatives and similar repositories for videosaur
Users interested in videosaur are comparing it to the repositories listed below.
- ☆87 · Updated 4 months ago
- [NeurIPS 2023] Self-supervised Object-Centric Learning for Videos ☆32 · Updated last year
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆26 · Updated last year
- ☆24 · Updated 5 months ago
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆22 · Updated 9 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- ☆46 · Updated last year
- Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models ☆118 · Updated 2 years ago
- [CVPR 2024 Highlight] SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers ☆73 · Updated last year
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆72 · Updated last year
- [ICLR 2023 - UNOFFICIAL] Bridging the Gap to Real-World Object-Centric Learning ☆23 · Updated last year
- Code for Stable Control Representations ☆26 · Updated 9 months ago
- ☆33 · Updated last year
- Official implementation of Improving Object-centric Learning With Query Optimization ☆51 · Updated 2 years ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆71 · Updated last year
- ☆60 · Updated last year
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆68 · Updated last year
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆42 · Updated last year
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆81 · Updated last year
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆58 · Updated 7 months ago
- [WIP] Code for LangToMo ☆20 · Updated 6 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- Official implementation of the CVPR 2024 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆63 · Updated 11 months ago
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆30 · Updated 10 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆58 · Updated 8 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆265 · Updated 2 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆73 · Updated last month
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆34 · Updated 7 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 3 months ago