declare-lab / nora
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
☆174 · Updated 2 months ago
Alternatives and similar repositories for nora
Users that are interested in nora are comparing it to the libraries listed below
- Official code of paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆111 · Updated 7 months ago
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World | CoRL 2025 ☆81 · Updated 4 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆267 · Updated 6 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆225 · Updated 6 months ago
- The repo of paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆138 · Updated 9 months ago
- Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation ☆82 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆142 · Updated 6 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆107 · Updated last month
- ☆66 · Updated 7 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆171 · Updated 2 weeks ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆202 · Updated 6 months ago
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆319 · Updated last month
- Latest Advances on Vision-Language-Action Models. ☆112 · Updated 7 months ago
- Official Repository for MolmoAct ☆205 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆187 · Updated 4 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆93 · Updated 6 months ago
- Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation ☆67 · Updated 2 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆289 · Updated last week
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆172 · Updated 2 weeks ago
- ☆266 · Updated last year
- This repository compiles a list of papers related to the application of video technology in the field of robotics. ☆167 · Updated 8 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆106 · Updated 5 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆381 · Updated 8 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆112 · Updated last month
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆281 · Updated last month
- Galaxea's first VLA release ☆279 · Updated this week
- Unified Vision-Language-Action Model ☆203 · Updated 2 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆234 · Updated 2 weeks ago
- Embodied Chain of Thought: A robotic policy that reasons in order to solve the task. ☆309 · Updated 6 months ago