LostXine / LLaRA
[ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
☆224 · Updated 5 months ago
Alternatives and similar repositories for LLaRA
Users interested in LLaRA are comparing it to the repositories listed below.
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆105 · Updated 5 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆139 · Updated 5 months ago
- ☆55 · Updated 9 months ago
- Code for subgoal synthesis via image editing ☆142 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆260 · Updated 6 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆252 · Updated 5 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆142 · Updated this week
- Official Repository for MolmoAct ☆193 · Updated 2 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆375 · Updated 8 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆212 · Updated 6 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆93 · Updated 6 months ago
- ☆215 · Updated last year
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆306 · Updated 5 months ago
- Reimplementation of GR-1, a generalized policy for robot manipulation. ☆143 · Updated last year
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆191 · Updated 2 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆154 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆111 · Updated 7 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆267 · Updated 2 months ago
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World ☆81 · Updated 3 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆183 · Updated 3 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆152 · Updated 11 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆134 · Updated 9 months ago
- This repository compiles a list of papers related to the application of video technology in the field of robotics! Star⭐ the repo and fol… ☆166 · Updated 7 months ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆119 · Updated 11 months ago
- Unified Vision-Language-Action Model ☆193 · Updated 2 months ago
- ICCV 2025 ☆133 · Updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆279 · Updated last year
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆123 · Updated last year
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆137 · Updated 10 months ago