H-Freax / Awesome-Video-Robotic-Papers
This repository compiles a list of papers related to the application of video technology in the field of robotics! Star ⭐ the repo and follow me if you like what you see 🤩.
⭐169 · Updated 10 months ago
Alternatives and similar repositories for Awesome-Video-Robotic-Papers
Users interested in Awesome-Video-Robotic-Papers are comparing it to the libraries listed below
Sorting:
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization — ⭐152 · Updated 8 months ago
- Code for subgoal synthesis via image editing — ⭐145 · Updated 2 years ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy — ⭐225 · Updated 8 months ago
- Official Repository for MolmoAct — ⭐267 · Updated this week
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" — ⭐196 · Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs — ⭐339 · Updated last month
- Embodied Chain of Thought: a robotic policy that reasons to solve the task — ⭐334 · Updated 8 months ago
- OpenVLA: an open-source vision-language-action model for robotic manipulation — ⭐304 · Updated 8 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos — ⭐186 · Updated 3 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) — ⭐303 · Updated 4 months ago