CVMI-Lab / SlotMIM
(CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning
☆13 · Updated 2 months ago
Alternatives and similar repositories for SlotMIM
Users interested in SlotMIM are comparing it to the repositories listed below.
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆12 · Updated last week
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- A paper list of world models ☆27 · Updated last month
- FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning ☆18 · Updated 4 months ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆33 · Updated last month
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated 2 weeks ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆44 · Updated 10 months ago
- Code for Stable Control Representations ☆24 · Updated last month
- ☆14 · Updated last month
- Mirage: a zero-shot cross-embodiment policy transfer method; benchmarking code for cross-embodiment policy transfer ☆21 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆31 · Updated 4 months ago
- ☆42 · Updated last year
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆62 · Updated 5 months ago
- Official implementation of Learning Navigational Visual Representations with Semantic Map Supervision (ICCV 2023) ☆25 · Updated last year
- ☆46 · Updated 5 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆53 · Updated 4 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆37 · Updated 2 years ago
- ☆18 · Updated 11 months ago
- ☆20 · Updated 10 months ago
- ☆14 · Updated 11 months ago
- ☆20 · Updated 9 months ago
- ☆12 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆76 · Updated last month
- Implementation of our ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation ☆19 · Updated last year
- ☆72 · Updated 8 months ago
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆33 · Updated 11 months ago
- Official repo for [CoRL 2024] Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation ☆28 · Updated 6 months ago
- [IROS 2024] PreAfford: Universal Affordance-Based Pre-grasping for Diverse Objects and Scenes ☆11 · Updated 7 months ago
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆19 · Updated 5 months ago