bdaiinstitute / theia
Theia: Distilling Diverse Vision Foundation Models for Robot Learning
☆265 · Updated 2 months ago
Alternatives and similar repositories for theia
Users interested in theia are comparing it to the libraries listed below.
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆209 · Updated 5 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 9 months ago
- Official Repository for MolmoAct ☆281 · Updated last month
- Official Repository for SAM2Act ☆219 · Updated 4 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆112 · Updated 8 months ago
- Code for subgoal synthesis via image editing ☆144 · Updated 2 years ago
- ☆350 · Updated 9 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆308 · Updated 5 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆323 · Updated 9 months ago
- [CoRL 2024] Official repo of "A3VLM: Actionable Articulation-Aware Vision Language Model" ☆121 · Updated last year
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆175 · Updated 3 months ago
- ☆130 · Updated 3 months ago
- VLA-0: Building State-of-the-Art VLAs with Zero Modification ☆423 · Updated this week
- ☆62 · Updated last year
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆165 · Updated last year
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆254 · Updated 9 months ago
- Unified Vision-Language-Action Model ☆257 · Updated 2 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆434 · Updated 11 months ago
- ☆64 · Updated last year
- ICCV 2025 ☆145 · Updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆297 · Updated last year
- ☆259 · Updated last year
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) ☆192 · Updated last year
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆254 · Updated 3 months ago
- ☆271 · Updated last year
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆100 · Updated last year
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆226 · Updated this week
- F3RM: Feature Fields for Robotic Manipulation. Official repo for the paper "Distilled Feature Fields Enable Few-Shot Language-Guided Mani… ☆215 · Updated last year
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆197 · Updated last year
- ☆418 · Updated last month