nvidia-cosmos / cosmos-predict1
Cosmos-Predict1 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world models for downstream applications.
☆295 · Updated last month
Alternatives and similar repositories for cosmos-predict1
Users interested in cosmos-predict1 are comparing it to the repositories listed below.
- Cosmos-Transfer1 is a world-to-world transfer model designed to bridge the perceptual divide between simulated and real-world environment… ☆542 · Updated this week
- Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆384 · Updated this week
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… ☆549 · Updated last week
- Official code for the CVPR 2025 paper "Navigation World Models". ☆307 · Updated last week
- Generative World Explorer ☆150 · Updated last month
- [ICCV 2025] Aether: Geometric-Aware Unified World Modeling ☆388 · Updated last week
- Open source repo for Locate 3D Model, 3D-JEPA and Locate 3D Dataset ☆336 · Updated last month
- [ICML 2025] Official PyTorch Implementation of "History-Guided Video Diffusion" ☆403 · Updated 2 weeks ago
- ☆163 · Updated 4 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions". ☆125 · Updated last month
- (CVPR 2025 Highlight) The Scene Language: Representing Scenes with Programs, Words, and Embeddings ☆224 · Updated this week
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆205 · Updated 3 weeks ago
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆290 · Updated 3 weeks ago
- ☆158 · Updated 2 months ago
- ☆131 · Updated 6 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with the 3D World ☆283 · Updated last week
- [ICLR 2025] Official Implementation of M3: 3D-Spatial Multimodal Memory ☆167 · Updated 2 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆268 · Updated last week
- Orient Anything (ICML 2025) ☆292 · Updated 2 months ago
- Towards a Generative 3D World Engine for Embodied Intelligence ☆246 · Updated last week
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction ☆214 · Updated 2 weeks ago
- [ICLR 2025 Spotlight] MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility ☆192 · Updated last week
- Benchmarking physical understanding in generative video models ☆183 · Updated last month
- SceneFun3D ToolKit ☆147 · Updated 3 months ago
- [CVPR 2024] Probing the 3D Awareness of Visual Foundation Models ☆314 · Updated last year
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆109 · Updated 8 months ago
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆259 · Updated 8 months ago
- Official implementation for WorldScore: A Unified Evaluation Benchmark for World Generation ☆118 · Updated 2 weeks ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆135 · Updated last month
- Unified Vision-Language-Action Model ☆128 · Updated 2 weeks ago