airs-cuhk / airspeed
☆40 · Updated 3 months ago
Alternatives and similar repositories for airspeed
Users that are interested in airspeed are comparing it to the libraries listed below
- 🤖 RoboOS: A Universal Embodied Operating System for Cross-Embodied and Multi-Robot Collaboration ☆290 · Updated last month
- Rynn Robotics Context Protocol ☆122 · Updated 3 weeks ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆364 · Updated 4 months ago
- ☆48 · Updated 5 months ago
- Simulation Platform from AgiBot ☆609 · Updated 3 weeks ago
- ☆307 · Updated 10 months ago
- ☆175 · Updated 3 weeks ago
- ☆84 · Updated 10 months ago
- ☆243 · Updated 2 weeks ago
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆207 · Updated last month
- Official Algorithm Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household A… ☆162 · Updated 5 months ago
- [RSS 2024] "DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation" code repository ☆351 · Updated last year
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models ☆136 · Updated last year
- This repo is designed for a General Robotic Operation System ☆144 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆645 · Updated 7 months ago
- An offline embodied-intelligence guide dog based on the InternLM2 large model ☆112 · Updated last year
- ☆383 · Updated last month
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆229 · Updated 3 months ago
- ☆864 · Updated 4 months ago
- Reference workflow for generating large amounts of synthetic motion trajectories for robot manipulation from a few human demonstrations. ☆184 · Updated 8 months ago
- A unified, agentic system for general-purpose robots, enabling multi-modal perception, mapping and localization, and autonomous mobility … ☆111 · Updated last week
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆618 · Updated last year
- Official Hardware Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac… ☆135 · Updated 2 months ago
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆467 · Updated 6 months ago
- This project is an entry in the 达闼 Cup "Robot Large Model and Embodied Intelligence Challenge". Our goal is to combine cutting-edge large-model technology with embodied intelligence to develop an intelligent robot that can take on the role of a waiter in a simulated café scene and autonomously complete various embodied tasks. This is the robot control-side code of our entry, "A Generative Embodied Agent Based on Large Models and Behavior Trees". ☆104 · Updated last year
- [ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆502 · Updated last week
- A Pragmatic VLA Foundation Model ☆771 · Updated last week
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆984 · Updated 2 months ago
- Building General-Purpose Robots Based on Embodied Foundation Model ☆759 · Updated last week
- Galaxea's open-source VLA repository ☆513 · Updated 3 weeks ago