H-Freax / Awesome-Video-Robotic-Papers
This repository compiles a list of papers related to the application of video technology in the field of robotics! Star ⭐ the repo and follow me if you like what you see 🤩.
☆166 · Updated 7 months ago
Alternatives and similar repositories for Awesome-Video-Robotic-Papers
Users that are interested in Awesome-Video-Robotic-Papers are comparing it to the libraries listed below
- Code for subgoal synthesis via image editing ☆141 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆139 · Updated 5 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆303 · Updated 5 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆291 · Updated last month
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆224 · Updated 5 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆256 · Updated 6 months ago
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆187 · Updated 10 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆266 · Updated last month
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆369 · Updated 7 months ago
- ☆211 · Updated last year
- Official Repository for MolmoAct ☆184 · Updated last week
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆228 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆103 · Updated 5 months ago
- Reimplementation of GR-1, a generalized policy for robotics manipulation. ☆143 · Updated last year
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆125 · Updated 3 weeks ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆267 · Updated 3 weeks ago
- Autoregressive Policy for Robot Learning (RA-L 2025) ☆137 · Updated 5 months ago
- Latest Advances on Vision-Language-Action Models. ☆108 · Updated 6 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆150 · Updated 2 weeks ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆199 · Updated 5 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆90 · Updated 6 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆212 · Updated 6 months ago
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" ☆353 · Updated last year
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆183 · Updated 2 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆279 · Updated last year
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆247 · Updated 3 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆33 · Updated last year
- ☆394 · Updated 7 months ago
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World ☆81 · Updated 3 months ago
- A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ☆144 · Updated 3 weeks ago