humanoidintelligence / EI-Beginner
[Hi-Beginner] The Embodied/Humanoid Intelligence Introductory Practice of OpenMOSS Lab (Fudan&SII)
☆132Updated last week
Alternatives and similar repositories for EI-Beginner
Users interested in EI-Beginner are comparing it to the repositories listed below
- RoboScholar: A Comprehensive Paper List of Embodied AI and Robotics Research☆187Updated 3 months ago
- [PKU EPIC Lab] A beginner's guide to getting started with embodied AI☆727Updated 2 months ago
- It's not a list of papers, but a list of paper reading lists...☆249Updated 9 months ago
- Academic leaderboard of Chinese scholars in the field of embodied AI☆99Updated 10 months ago
- A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation☆500Updated this week
- Pytorch PI-zero and PI-zero-fast. Adapted from LeRobot☆175Updated 5 months ago
- ☆175Updated 3 weeks ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,…☆472Updated 2 months ago
- A curated list of large VLM-based VLA models for robotic manipulation.☆339Updated last month
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works.☆384Updated 3 months ago
- This project provides a variety of code to help you collect data from your robotic arm. Have fun!☆198Updated this week
- Official implementation of ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver.☆197Updated 2 weeks ago
- A curated list of recent robot learning papers incorporating diffusion models for robotics tasks.☆307Updated 7 months ago
- Fetching Embodied AI Paper from ArXiv automatically☆219Updated this week
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models.☆208Updated 2 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning.☆394Updated 3 months ago
- [ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"☆502Updated last week
- ✨✨【NeurIPS 2025】Official implementation of BridgeVLA☆168Updated 4 months ago
- [ICLR 2026] Towards Unified Latent VLA for Whole-body Loco-manipulation Control☆200Updated 3 weeks ago
- This is the official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy".☆319Updated 2 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model☆336Updated 4 months ago
- ☆227Updated 4 months ago
- [NeurIPS 2025] Flow x RL. "ReinFlow: Fine-tuning Flow Policy with Online Reinforcement Learning". Supports VLAs, e.g., pi0, pi0.5. Fully op…☆247Updated last month
- Generative Artificial Intelligence in Robotic Manipulation: A Survey☆85Updated 7 months ago
- ☆383Updated last month
- ☆234Updated 5 months ago
- An All-in-one robot manipulation learning suite for policy models training and evaluation on various datasets and benchmarks.☆169Updated 3 months ago
- A paper list of my history reading. Robotics, Learning, Vision.☆510Updated last month
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation☆225Updated 7 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing☆1,043Updated last week