baaivision / UniVLA
[ICLR 2026] Unified Vision-Language-Action Model
☆274 · Updated Oct 15, 2025
Alternatives and similar repositories for UniVLA
Users interested in UniVLA are comparing it to the repositories listed below.
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆984 · Updated Nov 19, 2025
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆968 · Updated Dec 20, 2025
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆618 · Updated Oct 29, 2024
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆363 · Updated Jan 4, 2026
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆96 · Updated Jan 27, 2026
- ☆432 · Updated Nov 29, 2025
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System ☆24 · Updated Dec 24, 2025
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆110 · Updated Nov 15, 2025
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,380 · Updated Jan 6, 2026
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆162 · Updated Oct 1, 2025
- Distributed, scalable benchmarking of generalist robot policies. ☆81 · Updated this week
- Official repo for From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models ☆32 · Updated Nov 2, 2025
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆875 · Updated Dec 2, 2025
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. ☆381 · Updated Nov 11, 2025
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆2,524 · Updated Feb 5, 2026
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,251 · Updated Mar 23, 2025
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆277 · Updated Jul 8, 2025
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,019 · Updated Sep 9, 2025
- Official PyTorch Implementation of "Latent Denoising Makes Good Visual Tokenizers" ☆172 · Updated Dec 17, 2025
- ICCV 2025 ☆153 · Updated Dec 10, 2025
- NVIDIA GEAR Lab's initiative to solve the robotics data problem using world models ☆473 · Updated Oct 24, 2025
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆202 · Updated Nov 13, 2024
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 · Updated Oct 3, 2025
- Official PyTorch implementation for ICML 2025 paper: UP-VLA. ☆55 · Updated Jan 20, 2026
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆364 · Updated Apr 5, 2025
- ☆13 · Updated Jan 22, 2025
- [ICML 2025] The Official Implementation of "Efficient Robotic Policy Learning via Latent Space Backward Planning" ☆30 · Updated Dec 15, 2025
- [CVPR 2024] Hierarchical Diffusion Policy for Multi-Task Robotic Manipulation ☆222 · Updated Apr 9, 2024
- Official code of RDT 2 ☆686 · Updated Feb 7, 2026
- [CoRL 2025] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆334 · Updated Dec 29, 2025
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆46 · Updated Jul 17, 2025
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆186 · Updated Oct 8, 2025
- [TPAMI 2023] Object Affinity Learning: Towards Annotation-free Instance Segmentation ☆14 · Updated Sep 14, 2023
- Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries ☆34 · Updated Nov 19, 2025
- Code for "ACG: Action Coherence Guidance for Flow-based VLA Models" (ICRA 2026) ☆59 · Updated Feb 3, 2026
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆331 · Updated Jul 23, 2025
- A Bimanual-mobile Robot Manipulation Dataset specifically designed for household applications ☆16 · Updated Aug 12, 2024
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,459 · Updated Mar 15, 2025
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆460 · Updated Jan 22, 2025