Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning.
☆923 · Jan 6, 2026 · Updated 2 months ago
Alternatives and similar repositories for cosmos-reason1
Users interested in cosmos-reason1 are comparing it to the repositories listed below.
- Cosmos-Predict1 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆415 · Jan 6, 2026 · Updated 2 months ago
- Cosmos-Transfer1 is a world-to-world transfer model designed to bridge the perceptual divide between simulated and real-world environment… ☆782 · Jan 6, 2026 · Updated 2 months ago
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,804 · Dec 16, 2025 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,461 · Mar 23, 2025 · Updated 11 months ago
- Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆749 · Oct 29, 2025 · Updated 4 months ago
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos ☆8,084 · Jan 6, 2026 · Updated 2 months ago
- NVIDIA Isaac GR00T N1.6 - A Foundation Model for Generalist Robots. ☆6,345 · Updated this week
- Cosmos-RL is a flexible and scalable Reinforcement Learning framework specialized for Physical AI applications. ☆345 · Updated this week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆478 · Jan 22, 2025 · Updated last year
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆342 · Jul 23, 2025 · Updated 7 months ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆371 · Oct 13, 2025 · Updated 4 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆234 · Nov 6, 2025 · Updated 4 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆262 · Mar 12, 2025 · Updated 11 months ago
- ☆10,475 · Dec 27, 2025 · Updated 2 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆991 · Dec 20, 2025 · Updated 2 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆1,017 · Nov 19, 2025 · Updated 3 months ago
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. ☆1,560 · Jul 31, 2024 · Updated last year
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots ☆1,165 · Mar 2, 2026 · Updated last week
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,680 · Mar 2, 2026 · Updated last week
- Official repo and evaluation implementation of VSI-Bench ☆679 · Aug 5, 2025 · Updated 7 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,771 · Nov 28, 2025 · Updated 3 months ago
- PyTorch code and models for VJEPA2 self-supervised learning from video. ☆3,097 · Aug 28, 2025 · Updated 6 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,057 · Sep 9, 2025 · Updated 6 months ago
- ☆443 · Nov 29, 2025 · Updated 3 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,632 · Jan 21, 2026 · Updated last month
- ☆89 · Sep 23, 2025 · Updated 5 months ago
- ☆1,697 · Nov 5, 2025 · Updated 4 months ago
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,404 · Jan 31, 2025 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆665 · Jun 23, 2025 · Updated 8 months ago
- ☆43 · Apr 15, 2025 · Updated 10 months ago
- World modeling challenge for humanoid robots ☆554 · Nov 8, 2024 · Updated last year
- This code corresponds to simulation environments used as part of the MimicGen project. ☆552 · Aug 16, 2025 · Updated 6 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆623 · Oct 29, 2024 · Updated last year
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆531 · Dec 6, 2024 · Updated last year
- SAPIEN Manipulation Skill Framework, an open-source GPU-parallelized robotics simulator and benchmark, led by Hillbot, Inc. ☆2,629 · Updated this week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,181 · Feb 11, 2026 · Updated 3 weeks ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆227 · Mar 29, 2025 · Updated 11 months ago
- A suite of image and video neural tokenizers ☆1,714 · Feb 11, 2025 · Updated last year
- [ICCV 2025] TesserAct: Learning 4D Embodied World Models ☆382 · Aug 4, 2025 · Updated 7 months ago