nvidia-cosmos / cosmos-rl
Cosmos-RL is a flexible and scalable Reinforcement Learning framework specialized for Physical AI applications.
☆304 Updated this week
Alternatives and similar repositories for cosmos-rl
Users interested in cosmos-rl are comparing it to the libraries listed below.
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆330 Updated this week
- MiMo-Embodied ☆345 Updated 2 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆204 Updated 3 months ago
- Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation ☆102 Updated 6 months ago
- Cosmos-Predict2.5, the latest version of the Cosmos World Foundation Models (WFMs) family, specialized for simulating and predicting the … ☆735 Updated last week
- Real-Time VLAs via Future-state-aware Asynchronous Inference. ☆297 Updated this week
- [ICML'25] The PyTorch implementation of paper: "AdaWorld: Learning Adaptable World Models with Latent Actions". ☆194 Updated 7 months ago
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆206 Updated 3 weeks ago
- Cosmos-Reason2 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… ☆149 Updated last week
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆453 Updated 3 months ago
- Running VLA at 30Hz frame rate and 480Hz trajectory frequency ☆393 Updated last week
- Cosmos-Predict1 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆397 Updated 3 weeks ago
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… ☆886 Updated 3 weeks ago
- ☆178 Updated this week
- Virtual Community: An Open World for Humans, Robots, and Society ☆181 Updated last month
- Cosmos-Transfer2.5, built on top of Cosmos-Predict2.5, produces high-quality world simulations conditioned on multiple spatial control in… ☆422 Updated this week
- Galaxea's first VLA release ☆503 Updated 2 weeks ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆255 Updated 10 months ago
- [ICLR 2026] Unified Vision-Language-Action Model ☆268 Updated 3 months ago
- ☆367 Updated last week
- VLA-0: Building State-of-the-Art VLAs with Zero Modification ☆436 Updated 3 weeks ago
- ☆238 Updated last week
- Official repository for "Vid2World: Crafting Video Diffusion Models to Interactive World Models" (ICLR 2026), https://arxiv.org/abs/2505.… ☆34 Updated this week
- The official implementation of Mantis: A Versatile Vision-Language-Action Model with Disentangled Visual Foresight ☆75 Updated 2 weeks ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆329 Updated 10 months ago
- Official Repository of VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents ☆63 Updated this week
- ☆61 Updated 9 months ago
- Pytorch implementation of "Genie: Generative Interactive Environments", Bruce et al. (2024). ☆255 Updated last year
- ☆30 Updated 6 months ago
- Official implementation of "RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics" ☆53 Updated last week