nvidia-cosmos / cosmos-reason1
Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning.
☆317 · Updated last month
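The repository itself documents the supported inference stack; purely as illustration, the sketch below shows how a checkpoint like this could be queried for a physical-common-sense judgment through the generic Hugging Face transformers vision-language chat interface. The model id `nvidia/Cosmos-Reason1-7B`, the image URL, and the prompt are assumptions for the example, not taken from this page.

```python
# Hedged sketch only: assumes a Cosmos-Reason1 checkpoint is published on
# Hugging Face under "nvidia/Cosmos-Reason1-7B" (assumed id) and that it
# works with the generic transformers vision-language chat interface.
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "nvidia/Cosmos-Reason1-7B"  # assumed identifier
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

# A physical-common-sense question; asking for step-by-step thinking is
# how long chain-of-thought answers are typically elicited.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/tilted_tray.jpg"},  # placeholder image
        {"type": "text", "text": "Think step by step: can the robot safely "
                                 "place the full cup on this tilted tray?"},
    ],
}]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens (the model's reasoning and answer).
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```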
Alternatives and similar repositories for cosmos-reason1
Users interested in cosmos-reason1 are comparing it to the repositories listed below.
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆248 · Updated 3 months ago
- Cosmos-Transfer1 is a world-to-world transfer model designed to bridge the perceptual divide between simulated and real-world environment… ☆401 · Updated this week
- Official repo and evaluation implementation of VSI-Bench ☆481 · Updated 2 months ago
- ☆344 · Updated 3 months ago
- Embodied Reasoning Question Answering (ERQA) Benchmark ☆153 · Updated 2 months ago
- Embodied Chain of Thought: a robotic policy that reasons to solve the task. ☆239 · Updated last month
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆301 · Updated 2 weeks ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆187 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆71 · Updated last week
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆386 · Updated 2 weeks ago
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ☆418 · Updated last week
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆250 · Updated last year
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆255 · Updated 2 weeks ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆189 · Updated last month
- Cosmos-Predict1 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆193 · Updated this week
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆503 · Updated 6 months ago
- Compose multimodal datasets 🎹 ☆371 · Updated 3 weeks ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆210 · Updated last month
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ☆675 · Updated 10 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆135 · Updated this week
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆117 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆217 · Updated 2 weeks ago
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences". ☆216 · Updated last year
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆315 · Updated 3 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆119 · Updated last month
- PyTorch implementation of "Genie: Generative Interactive Environments", Bruce et al. (2024). ☆152 · Updated 8 months ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ☆437 · Updated 3 weeks ago
- ☆187 · Updated last month
- ☆159 · Updated 2 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆281 · Updated 7 months ago