ayesha-ishaq / DriveLMM-o1
Benchmark and model for step-by-step reasoning in autonomous driving.
☆67 · Updated 8 months ago
Alternatives and similar repositories for DriveLMM-o1
Users interested in DriveLMM-o1 are comparing it to the repositories listed below.
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆97 · Updated last year
- ☆70 · Updated last year
- [NeurIPS 2025] SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models ☆74 · Updated 2 months ago
- [ECCV 2024] Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆96 · Updated last year
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆84 · Updated 9 months ago
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆32 · Updated last year
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆109 · Updated 10 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆115 · Updated last year
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆163 · Updated 2 years ago
- [AAAI 2025] Language Prompt for Autonomous Driving ☆150 · Updated 2 months ago
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆79 · Updated last year
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆107 · Updated 6 months ago
- Official repository for NuScenes-MQA, accepted at the LLVM-AD Workshop at WACV 2024 ☆35 · Updated last year
- Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving (ICCV 2025) ☆33 · Updated 6 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆206 · Updated 5 months ago
- ☆90 · Updated last year
- Talk2BEV: Language-Enhanced Bird's Eye View Maps (ICRA'24) ☆115 · Updated last year
- ☆73 · Updated 3 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario ☆216 · Updated last year
- Simulator designed to generate diverse driving scenarios ☆43 · Updated 9 months ago
- Official repository for the paper "Can LVLMs Obtain a Driver's License? A Benchmark Towards Reliable AGI for Autonomous Driving" ☆29 · Updated 6 months ago
- [CoRL 2024] Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆69 · Updated last year
- ☆96 · Updated 11 months ago
- [ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving" ☆193 · Updated last year
- Project page of "RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning" ☆19 · Updated 2 months ago
- ☆180 · Updated last year
- ☆15 · Updated last year
- [Official] [IROS 2024] A goal-oriented planning approach to lift VLN performance for Closed-Loop Navigation: Simple, Yet Effective ☆29 · Updated last year
- ☆124 · Updated last year
- ☆92 · Updated last year