MCG-NJU / TPM
[WACV 2025 Oral] Transferring Foundation Models for Generalizable Robotic Manipulation
☆21 · Updated 2 months ago
Alternatives and similar repositories for TPM
Users interested in TPM are comparing it to the repositories listed below.
- ☆89 · Updated 3 weeks ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆33 · Updated 2 months ago
- ☆46 · Updated 5 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆89 · Updated 3 weeks ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆59 · Updated 9 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆78 · Updated this week
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆93 · Updated 9 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆65 · Updated 3 weeks ago
- ☆48 · Updated last year
- AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆75 · Updated 2 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆95 · Updated 3 months ago
- ☆72 · Updated 9 months ago
- [AAAI 2023 Oral] CoMAE: Single Model Hybrid Pre-training on Small-Scale RGB-D Datasets ☆36 · Updated 9 months ago
- ☆18 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆43 · Updated last year
- This is the official repository of OCL (ICCV 2023). ☆22 · Updated last year
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆104 · Updated 6 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆78 · Updated last month
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation … ☆37 · Updated 3 months ago
- ☆54 · Updated 3 months ago
- The official repository of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆124 · Updated 5 months ago
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆27 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆51 · Updated 4 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆114 · Updated 6 months ago
- Human Demo Videos to Robot Action Plans ☆52 · Updated 6 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆128 · Updated 7 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆66 · Updated 5 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆31 · Updated 5 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last month