google-deepmind / robovqa
☆26 · Updated last year
Alternatives and similar repositories for robovqa
Users interested in robovqa are comparing it to the repositories listed below.
- ☆77 · Updated 11 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆33 · Updated 9 months ago
- ☆53 · Updated 7 months ago
- ☆69 · Updated 9 months ago
- ☆106 · Updated last month
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆120 · Updated 2 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆72 · Updated 7 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆96 · Updated 3 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated 2 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated last year
- ☆44 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆87 · Updated 4 months ago
- Official implementation of "Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance" (CoRL 2024) ☆33 · Updated 3 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆60 · Updated 7 months ago
- ☆76 · Updated 2 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 2 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆50 · Updated 3 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆32 · Updated last year
- ☆40 · Updated 11 months ago
- ☆44 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆135 · Updated 4 months ago
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆106 · Updated last week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆104 · Updated this week
- Codebase for HiP ☆90 · Updated last year
- Streaming Diffusion Policy: Fast Policy Synthesis with Variable Noise Diffusion Models ☆65 · Updated 2 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆38 · Updated 4 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆84 · Updated 3 weeks ago
- An unofficial PyTorch dataloader for the Open X-Embodiment datasets, https://github.com/google-deepmind/open_x_embodiment ☆18 · Updated 6 months ago (a minimal RLDS loading sketch follows this list)
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆44Updated last year
- The official codebase for running the experiments described in the AVDC paper.☆17Updated 10 months ago
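
Several entries above work with RLDS/TFDS-format robot data, including the Open X-Embodiment dataloader linked above. As a point of reference, here is a minimal sketch of loading one RLDS dataset with tensorflow_datasets and iterating its episodes. The dataset name and GCS path (`fractal20220817_data`, version `0.1.0`) are assumed for illustration; the available datasets, versions, and per-step observation/action keys vary per dataset.

```python
# Minimal sketch: load an RLDS-format Open X-Embodiment dataset with
# tensorflow_datasets. The builder directory below is an assumed example;
# substitute the dataset and version you actually need.
import tensorflow_datasets as tfds

builder = tfds.builder_from_directory(
    builder_dir="gs://gresearch/robotics/fractal20220817_data/0.1.0"
)
ds = builder.as_dataset(split="train")

# In RLDS, each element is an episode dict whose "steps" field is itself
# a nested tf.data.Dataset of per-timestep dicts.
for episode in ds.take(1):
    for step in episode["steps"]:
        observation = step["observation"]  # dict of sensor tensors
        action = step["action"]            # dataset-specific action spec
```

A PyTorch training pipeline would typically wrap an iterator like this (as the unofficial dataloader above does) rather than re-implementing the RLDS parsing itself.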