TencentARC / Moto
[ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos
☆159 · Updated 3 months ago
Alternatives and similar repositories for Moto
Users interested in Moto are comparing it to the repositories listed below.
- ☆62 · Updated last year
- ICCV2025 ☆146 · Updated last month
- Unified Vision-Language-Action Model ☆260 · Updated 3 months ago
- ☆138 · Updated 6 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆162 · Updated 3 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆114 · Updated 9 months ago
- ☆89 · Updated last year
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆314 · Updated 5 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆117 · Updated 3 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆117 · Updated 4 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆108 · Updated 2 months ago
- Official implementation of GR-MG ☆93 · Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆203 · Updated 4 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆155 · Updated 9 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆335 · Updated last week
- [AAAI26 oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling ☆77 · Updated this week
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆90 · Updated 3 months ago
- [ICCV2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆95 · Updated 6 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆246 · Updated last year
- VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos ☆248 · Updated 3 weeks ago
- ☆64 · Updated 10 months ago
- Reimplementation of GR-1, a generalized policy for robotics manipulation. ☆146 · Updated last year
- Official implementation of the paper: Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent (https://arxiv.org/pdf/2505.035…) ☆99 · Updated 5 months ago
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆250 · Updated last month
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions". ☆190 · Updated 7 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆206 · Updated 7 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆47 · Updated last month
- Implementation of VLM4VLA ☆33 · Updated this week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆130 · Updated 4 months ago