niejnan / OpenVLA
Reproduction of the OpenVLA multimodal embodied-AI foundation model, with fine-tuning improvements on the LIBERO benchmark (a minimal usage sketch follows below)
☆88 · Updated last month
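For orientation, the snippet below is a minimal inference sketch based on the Hugging Face interface documented in the upstream openvla/openvla README; the `openvla/openvla-7b` checkpoint, the prompt template, and the `bridge_orig` un-normalization key are upstream defaults assumed here, and a LIBERO fine-tuned checkpoint from this repository would substitute its own model path and `unnorm_key`.

```python
# Minimal sketch, assuming the upstream OpenVLA Hugging Face interface.
# Checkpoint name, prompt template, and unnorm_key follow the openvla/openvla
# README and are not necessarily this repository's LIBERO fine-tune.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b"  # swap in a LIBERO fine-tuned checkpoint if available

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

image = Image.open("frame.png")  # current camera observation (hypothetical file)
prompt = "In: What action should the robot take to pick up the black bowl?\nOut:"

# predict_action returns a 7-DoF end-effector action; unnorm_key selects the
# dataset statistics used to un-normalize it (BridgeData key in the upstream docs).
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)
```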
Alternatives and similar repositories for OpenVLA
Users interested in OpenVLA are comparing it to the repositories listed below
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆243 · Updated 2 weeks ago
- ☆116 · Updated 2 months ago
- [CVPR 25 Highlight & ECCV Workshop 24 Best Paper] RoboTwin Dual-arm Robot Manipulation Simulation Platform ☆844 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆203 · Updated 2 weeks ago
- ☆323 · Updated last month
- A Chinese-language tutorial that helps beginners get started with embodied AI through the LeRobot project ☆33 · Updated 3 months ago
- A comprehensive list of papers about Robot Manipulation, including papers, codes, and related websites. ☆335 · Updated last week
- 🔥 RSS2025 & CVPR2025 & ICLR2025 Embodied AI Paper List Resources. Star ⭐ the repo and follow me if you like what you see 🤩. ☆280 · Updated this week
- ☆190 · Updated this week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆152 · Updated last month
- Codebase for the BestMan Mobile Manipulator Platform ☆308 · Updated last month
- ☆100 · Updated last month
- ☆62 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆386 · Updated 2 weeks ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆255 · Updated 2 weeks ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo, and OpenVLA) in simulation under common setu… ☆126 · Updated last week
- ☆344 · Updated 3 months ago
- [CVPR 2025 Highlight] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints ☆121 · Updated last month
- ☆536 · Updated last month
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆200 · Updated this week
- Code for RoboFlamingo ☆376 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025. ☆301 · Updated 2 weeks ago
- 🚀 A collection of utilities and tools for LeRobot. ☆167 · Updated this week
- ☆135 · Updated last month
- It's not a list of papers, but a list of paper reading lists... ☆186 · Updated 3 weeks ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆131 · Updated 10 months ago
- This is the official implementation of RoboBERT, a novel end-to-end multiple-modality robotic operations training framework. ☆48 · Updated 2 months ago
- [RSS 2025] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning ☆134 · Updated 3 weeks ago
- [arXiv 2025] MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation ☆29 · Updated last month
- ☆75 · Updated 3 weeks ago