bdaiinstitute / theia
Theia: Distilling Diverse Vision Foundation Models for Robot Learning
☆257 · Updated this week
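The repositories listed on this page broadly share one pattern: a pretrained (often distilled) visual encoder feeding a learned robot policy. The sketch below illustrates that pattern in plain PyTorch, assuming a frozen encoder that maps images to feature vectors; all class and variable names are illustrative and none of this is Theia's actual API (see the repository's README for real usage).

```python
# Minimal sketch of the common pattern behind these repositories: a frozen,
# pretrained visual encoder (e.g. a distilled model such as Theia) feeding a
# small policy head. All names here are illustrative, not Theia's actual API.
import torch
import torch.nn as nn

class PolicyOnFrozenEncoder(nn.Module):
    def __init__(self, encoder: nn.Module, feature_dim: int, action_dim: int):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)          # keep the pretrained encoder frozen
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.encoder(images)      # (B, feature_dim) visual features
        return self.head(feats)               # (B, action_dim) predicted actions

# Trivial stand-in encoder for demonstration; a real setup would load a
# distilled checkpoint (e.g. Theia weights) here instead.
dummy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))
policy = PolicyOnFrozenEncoder(dummy_encoder, feature_dim=512, action_dim=7)
actions = policy(torch.rand(2, 3, 64, 64))
print(actions.shape)  # torch.Size([2, 7])
```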
Alternatives and similar repositories for theia
Users interested in theia are comparing it to the libraries listed below.
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆225 · Updated 7 months ago
- Official Repository for MolmoAct ☆244 · Updated 2 weeks ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆198 · Updated 3 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆109 · Updated 6 months ago
- Official Repository for SAM2Act ☆211 · Updated 2 months ago
- ☆297 · Updated 7 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆280 · Updated 7 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆281 · Updated 3 months ago
- Code for subgoal synthesis via image editing ☆143 · Updated 2 years ago
- ☆60 · Updated 10 months ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆120 · Updated last year
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆156 · Updated last year
- ICCV 2025 ☆140 · Updated 2 months ago
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆146 · Updated last month
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆238 · Updated 7 months ago
- ☆233 · Updated last year
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆398 · Updated 9 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆233 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆89 · Updated last year
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆358 · Updated 2 weeks ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆285 · Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆176 · Updated 2 months ago
- ☆266 · Updated last year
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆209 · Updated last month
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) ☆185 · Updated last year
- ☆123 · Updated 2 years ago
- Autoregressive Policy for Robot Learning (RA-L 2025) ☆140 · Updated 7 months ago
- ☆210 · Updated 3 weeks ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆100 · Updated 7 months ago
- Unified Vision-Language-Action Model ☆223 · Updated 3 weeks ago