TencentARC / Moto
[ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos
☆150 · Updated 2 months ago
Alternatives and similar repositories for Moto
Users who are interested in Moto are comparing it to the repositories listed below.
- ☆60 · Updated 11 months ago
- ICCV2025 ☆142 · Updated 3 weeks ago
- Unified Vision-Language-Action Model ☆245 · Updated last month
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆182 · Updated 3 months ago
- ☆135 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆296 · Updated 3 weeks ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆158 · Updated 2 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆298 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆80 · Updated 11 months ago
- ☆87 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆106 · Updated 3 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆110 · Updated 7 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences ☆240 · Updated last year
- Official implementation of GR-MG ☆93 · Updated 10 months ago
- [ICCV2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆92 · Updated 5 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆151 · Updated 8 months ago
- Official implementation of the paper: Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent (https://arxiv.org/pdf/2505.035…) ☆87 · Updated 4 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆46 · Updated last week
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆198 · Updated this week
- ☆41 · Updated 5 months ago
- Reimplementation of GR-1, a generalized policy for robotics manipulation ☆144 · Updated last year
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆245 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆200 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆205 · Updated 5 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆78 · Updated 6 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆414 · Updated 10 months ago
- [AAAI26 Oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling ☆63 · Updated last month
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆96 · Updated 2 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆130 · Updated 3 months ago
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites ☆86 · Updated 2 weeks ago