GigaAI-research / General-World-Models-Survey
☆402 · Updated last year
Alternatives and similar repositories for General-World-Models-Survey
Users interested in General-World-Models-Survey are comparing it to the libraries listed below
- A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆171 · Updated this week
- A curated list of world models for autonomous driving. Keep updated. ☆336 · Updated this week
- Official repo and evaluation implementation of VSI-Bench ☆522 · Updated 3 months ago
- [NeurIPS 2024] A Generalizable World Model for Autonomous Driving ☆753 · Updated 6 months ago
- Official code for the CVPR 2025 paper "Navigation World Models". ☆247 · Updated 2 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆117 · Updated 11 months ago
- Collect some World Models for Autonomous Driving (and Robotics) papers. ☆1,097 · Updated this week
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆238 · Updated 3 months ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆602 · Updated last week
- Awesome Papers about World Models in Autonomous Driving ☆81 · Updated last year
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆443 · Updated 2 months ago
- Code for "DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT" ☆191 · Updated 5 months ago
- An open-source code repository of driving world models, with training, inference, evaluation tools, and pretrained checkpoints. ☆255 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆312 · Updated 5 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆211 · Updated 6 months ago
- [CVPR2024] Official Repository of Paper "Panacea: Panoramic and Controllable Video Generation for Autonomous Driving" ☆231 · Updated 10 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆77 · Updated 4 months ago
- [CVPR 2024] A world model for autonomous driving. ☆359 · Updated last year
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆531 · Updated 7 months ago
- [CVPR2024 Highlight] Editable Scene Simulation for Autonomous Driving via LLM-Agent Collaboration ☆392 · Updated 6 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆474 · Updated this week
- A Language Agent for Autonomous Driving ☆264 · Updated last year
- [CVPR 2024 Highlight] GenAD: Generalized Predictive Model for Autonomous Driving ☆729 · Updated 5 months ago
- [WACV 2024 Survey Paper] Multimodal Large Language Models for Autonomous Driving ☆288 · Updated last year
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆448 · Updated 3 weeks ago
- [AAAI 2025] DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation ☆184 · Updated 3 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆197 · Updated 5 months ago
- ☆363 · Updated 5 months ago
- Bridging Large Vision-Language Models and End-to-End Autonomous Driving ☆399 · Updated 6 months ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆501 · Updated 6 months ago