CVMI-Lab / SlotMIM
(CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning
☆22 · Updated 9 months ago
Alternatives and similar repositories for SlotMIM
Users interested in SlotMIM are comparing it to the repositories listed below.
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆32 · Updated 6 months ago
- ☆60 · Updated 11 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆80 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆57 · Updated 7 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆99 · Updated last year
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆67 · Updated 11 months ago
- [WIP] Code for LangToMo ☆20 · Updated 5 months ago
- ☆88 · Updated last year
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆46 · Updated last week
- ☆23 · Updated last month
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆111 · Updated 7 months ago
- ☆135 · Updated 5 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆40 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆151 · Updated 2 months ago
- List of papers on video-centric robot learning ☆22 · Updated last year
- ☆41 · Updated 5 months ago
- ☆51 · Updated 7 months ago
- Official Repository for SAM2Act ☆215 · Updated 3 months ago
- ICCV 2025 ☆143 · Updated 3 weeks ago
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆93 · Updated 5 months ago
- ☆33 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆45 · Updated 2 years ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- [CoRL 2024] Official repo of "A3VLM: Actionable Articulation-Aware Vision Language Model" ☆121 · Updated last year
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆89 · Updated last year
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆101 · Updated 3 weeks ago
- ☆47 · Updated last year
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR 2024) ☆45 · Updated 5 months ago
- Official implementation of GR-MG ☆92 · Updated 11 months ago