Psi-Robot / DexGraspVLA
[AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping
☆467 · Updated Aug 10, 2025
Alternatives and similar repositories for DexGraspVLA
Users interested in DexGraspVLA are comparing it to the repositories listed below.
- [CVPR 2025 Highlight] DexGraspAnything: Towards Universal Robotic Dexterous Grasping with Physics Awareness ☆204 · Updated Dec 22, 2025
- ☆386 · Updated Jan 6, 2025
- ☆179 · Updated Mar 22, 2025
- [CoRL 2024] DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes ☆125 · Updated Jan 23, 2025
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,019 · Updated Sep 9, 2025
- [CVPR 2025] 🎉 Official repository of "ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning" ☆280 · Updated Oct 10, 2025
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 · Updated Oct 3, 2025
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,614 · Updated Jan 21, 2026
- ☆38 · Updated Apr 15, 2025
- Official code repository of the paper "D(R, O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous…" ☆257 · Updated Nov 13, 2025
- [IROS 2025] Generalizable Humanoid Manipulation with 3D Diffusion Policies. Part 1: Train & Deploy of iDP3 ☆501 · Updated Jun 16, 2025
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation ☆906 · Updated Feb 20, 2025
- [RSS 2025] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning ☆234 · Updated Jul 18, 2025
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy" ☆320 · Updated Nov 11, 2025
- OpenVLA: An open-source vision-language-action model for robotic manipulation.