Open-X-Humanoid / pelican-vl
Pelican-VL 1.0 is a new family of open-source embodied brain models, with parameter scales ranging from 7B to 72B, developed by the WFM System Group at the Beijing Innovation Center of Humanoid Robotics (X-Humanoid).
⭐66 · Updated last month
Alternatives and similar repositories for pelican-vl
Users interested in pelican-vl are comparing it to the repositories listed below.
- 🤖 RoboOS: A Universal Embodied Operating System for Cross-Embodied and Multi-Robot Collaboration ⭐280 · Updated last month
- Official Algorithm Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household A…" ⭐159 · Updated 4 months ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ⭐358 · Updated 3 months ago
- Rynn Robotics Context Protocol ⭐118 · Updated this week
- Running VLA at 30 Hz frame rate and 480 Hz trajectory frequency ⭐373 · Updated this week
- Galaxea's first VLA release ⭐498 · Updated this week
- ⭐78 · Updated 10 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ⭐277 · Updated last month
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ⭐205 · Updated 2 weeks ago
- ⭐103 · Updated 2 months ago
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models ⭐134 · Updated last year
- ⭐61 · Updated 9 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ⭐283 · Updated this week
- ⭐788 · Updated 3 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ⭐328 · Updated 8 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ⭐224 · Updated 2 months ago
- Official Hardware Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac…" ⭐135 · Updated 2 months ago
- Official Implementation of "Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance" ⭐63 · Updated 3 months ago
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment ⭐203 · Updated last month
- Official Code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition ⭐143 · Updated 6 months ago
- Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning ⭐138 · Updated 5 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ⭐185 · Updated 3 months ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ⭐436 · Updated last week
- VLA-Arena is an open-source benchmark for systematic evaluation of Vision-Language-Action (VLA) models. ⭐101 · Updated last week
- Official implementation of TrajBooster ⭐166 · Updated 3 weeks ago
- ⭐53 · Updated last week
- Cross-embOdiment Mobility Policy via ResiduAl RL and Skill Synthesis ⭐81 · Updated 2 months ago
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ⭐462 · Updated 5 months ago
- A unified, agentic system for general-purpose robots, enabling multi-modal perception, mapping and localization, and autonomous mobility … ⭐85 · Updated last week
- The repository for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ⭐149 · Updated last year