google-deepmind / robovqa
☆36 · Updated 2 years ago
Alternatives and similar repositories for robovqa
Users interested in robovqa are comparing it to the repositories listed below.
- ☆89 · Updated last year
- ☆62 · Updated last year
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆40 · Updated last year
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated last year
- ☆138 · Updated 6 months ago
- ☆79 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆114 · Updated 9 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆69 · Updated last year
- ☆47 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆107 · Updated 10 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆155 · Updated 9 months ago
- ☆44 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆159 · Updated 3 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 7 months ago
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆101 · Updated 4 months ago
- Streaming Diffusion Policy: Fast Policy Synthesis with Variable Noise Diffusion Models ☆74 · Updated 8 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆40 · Updated last year
- Official code for "QueST: Self-Supervised Skill Abstractions for Continuous Control" [NeurIPS 2024] ☆104 · Updated last year
- ☆43 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 8 months ago
- ☆76 · Updated last year
- The official codebase for running the experiments described in the AVDC paper. ☆19 · Updated last year
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆246 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆94 · Updated last year
- Codebase for HiP ☆90 · Updated 2 years ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆117 · Updated 4 months ago
- Interactive Post-Training for Vision-Language-Action Models ☆158 · Updated 7 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆30 · Updated 10 months ago