martius-lab / videosaur
Repository for our paper "Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities"
☆33 · Updated 11 months ago
Alternatives and similar repositories for videosaur
Users interested in videosaur are comparing it to the repositories listed below.
- ☆88 · Updated 5 months ago
- [NeurIPS 2023] Self-supervised Object-Centric Learning for Videos ☆32 · Updated last year
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆22 · Updated 10 months ago
- ☆24 · Updated 6 months ago
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆26 · Updated last year
- Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models ☆119 · Updated 2 years ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- ☆46 · Updated last year
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated 2 years ago
- Official implementation of Improving Object-centric Learning With Query Optimization ☆51 · Updated 2 years ago
- [CVPR 2024 Highlight] SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers ☆73 · Updated last year
- [ICLR 2023 - UNOFFICIAL] Bridging the Gap to Real-World Object-Centric Learning ☆23 · Updated last year
- Official PyTorch implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆72 · Updated last year
- [WIP] Code for LangToMo ☆20 · Updated 7 months ago
- ☆62 · Updated last year
- Official implementation of "Object-Centric Video Prediction via Decoupling of Object Dynamics and Interactions" by Villar-Corrales et al. ☆23 · Updated 2 years ago
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆31 · Updated 10 months ago
- Official implementation of the CVPR'24 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆63 · Updated last year
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆37 · Updated 8 months ago
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24) ☆45 · Updated 7 months ago
- ☆44 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆69 · Updated last year
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆42 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆59 · Updated 8 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆265 · Updated 2 months ago
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆82 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- ☆33 · Updated last year