LostXine / LLaRA
[ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
☆225 · Updated 6 months ago
Alternatives and similar repositories for LLaRA
Users interested in LLaRA are comparing it to the repositories listed below.
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆143 · Updated 6 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆385 · Updated 8 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆253 · Updated 6 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆106 · Updated 6 months ago
- Code for subgoal synthesis via image editing ☆143 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation ☆269 · Updated 7 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task ☆312 · Updated 6 months ago
- ☆221 · Updated last year
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆195 · Updated 3 months ago
- Official Repository for MolmoAct ☆212 · Updated last month
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆229 · Updated 7 months ago
- This repository compiles a list of papers related to the application of video technology in the field of robotics! Star⭐ the repo and fol… ☆167 · Updated 8 months ago
- ☆58 · Updated 10 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆276 · Updated 2 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆95 · Updated 7 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences ☆232 · Updated last year
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆111 · Updated 8 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆185 · Updated 3 weeks ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆166 · Updated last month
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World