LostXine / LLaRA
[ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
☆225 · Updated 7 months ago
Alternatives and similar repositories for LLaRA
Users interested in LLaRA are comparing it to the repositories listed below.
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆257 · Updated this week
- ☆233 · Updated last year
- Official Repository for MolmoAct ☆244 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆145 · Updated 7 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆280 · Updated 7 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆109 · Updated 6 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆238 · Updated 7 months ago
- Code for subgoal synthesis via image editing ☆143 · Updated 2 years ago
- ☆60 · Updated 10 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆320 · Updated 7 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆398 · Updated 9 months ago
- This repository compiles a list of papers on the application of video technology in robotics! Star ⭐ the repo and fol… ☆168 · Updated 9 months ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆198 · Updated 3 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆114 · Updated 8 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆209 · Updated last month
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) ☆281 · Updated 3 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆100 · Updated 7 months ago
- Reimplementation of GR-1, a generalized policy for robotic manipulation. ☆143 · Updated last year
- Repository for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆138 · Updated 10 months ago
- Unified Vision-Language-Action Model ☆223 · Updated 3 weeks ago
- Official Repository for SAM2Act ☆211 · Updated 2 months ago
- PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆228 · Updated last week
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences" ☆233 · Updated last year
- NVIDIA GEAR Lab's initiative to solve the robotics data problem using world models ☆358 · Updated 2 weeks ago
- ☆297 · Updated 7 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆285 · Updated last year
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆156 · Updated last year
- Latest advances on Vision-Language-Action models ☆117 · Updated 8 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆194 · Updated 5 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆176 · Updated 2 months ago