H-Freax / Awesome-Video-Robotic-Papers
This repository compiles papers on applying video technology to robotics! Star ⭐ the repo and follow me if you like what you see 🤩.
⭐167 · Updated 8 months ago
Alternatives and similar repositories for Awesome-Video-Robotic-Papers
Users that are interested in Awesome-Video-Robotic-Papers are comparing it to the libraries listed below
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization (⭐142 · updated 6 months ago)
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs (⭐298 · updated 2 months ago)
- OpenVLA: an open-source vision-language-action model for robotic manipulation (⭐267 · updated 6 months ago)
- Embodied Chain of Thought: a robotic policy that reasons to solve tasks (⭐309 · updated 6 months ago)
- Code for subgoal synthesis via image editing (⭐142 · updated last year)
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos (⭐159 · updated last month)
- Official repository for MolmoAct (⭐205 · updated 3 weeks ago)
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy (⭐225 · updated 6 months ago)
- (⭐219 · updated last year)
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" (⭐192 · updated 10 months ago)
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos