Robbyant / lingbot-vla
A Pragmatic VLA Foundation Model
☆247 · Updated this week
Alternatives and similar repositories for lingbot-vla
Users interested in lingbot-vla are comparing it to the repositories listed below.
- ☆367 · Updated last week
- VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos ☆273 · Updated last week
- WoW (World-Omniscient World Model) is a generative world model trained on 2 million robotic interaction trajectories, designed to imagine… ☆135 · Updated 3 weeks ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆465 · Updated 2 weeks ago
- Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Model ☆170 · Updated 3 weeks ago
- Towards a Generative 3D World Engine for Embodied Intelligence ☆385 · Updated this week
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆259 · Updated last month
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆280 · Updated last week
- Official implementation of "RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics" ☆53 · Updated last week
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆173 · Updated 7 months ago
- ☆165 · Updated 3 weeks ago
- PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation ☆315 · Updated 3 weeks ago
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆172 · Updated 7 months ago
- ☆223 · Updated 3 months ago
- [NeurIPS 2025] InternScenes: A Large-scale Interactive Indoor Scene Dataset with Realistic Layouts ☆219 · Updated 3 months ago
- Official Code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition ☆144 · Updated 6 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆121 · Updated 3 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆126 · Updated 5 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆305 · Updated last week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆319 · Updated 6 months ago
- Official code for EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models ☆96 · Updated 7 months ago
- [ICLR 2026] Unified Vision-Language-Action Model ☆268 · Updated 3 months ago
- EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video ☆123 · Updated 5 months ago
- Sim-to-real and CDM inference code for the ManipAsInSim project ☆137 · Updated last month
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆108 · Updated 2 months ago
- Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization ☆313 · Updated this week
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆156 · Updated 3 weeks ago
- Galaxea's first VLA release ☆503 · Updated last week
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆344 · Updated 3 weeks ago
- InternRobotics' open-source toolbox for vision-based embodied spatial intelligence ☆47 · Updated 4 months ago