ayesha-ishaq / DriveLMM-o1
Benchmark and model for step-by-step reasoning in autonomous driving.
☆ 56 · Updated 3 months ago
Alternatives and similar repositories for DriveLMM-o1
Users interested in DriveLMM-o1 are comparing it to the repositories listed below.
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆ 92 · Updated 6 months ago
- ☆ 41 · Updated 3 weeks ago
- ☆ 62 · Updated 10 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆ 94 · Updated 5 months ago
- Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆ 85 · Updated last year
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆ 69 · Updated 6 months ago
- Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆ 81 · Updated 4 months ago
- [AAAI 2025] Language Prompt for Autonomous Driving ☆ 139 · Updated 6 months ago
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆ 31 · Updated last year
- A simulator designed to generate diverse driving scenarios. ☆ 40 · Updated 3 months ago
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆ 140 · Updated last year
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆ 92 · Updated 3 weeks ago
- [CoRL 2024] Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆ 61 · Updated 7 months ago
- Official repository for the paper "Can LVLMs Obtain a Driver's License? A Benchmark Towards Reliable AGI for Autonomous Driving" ☆ 29 · Updated last month
- ☆ 73 · Updated 6 months ago
- [ICRA 2024] Talk2BEV: Language-Enhanced Bird's Eye View Maps ☆ 110 · Updated 7 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆ 98 · Updated 8 months ago
- [NeurIPS 2024] Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving ☆ 129 · Updated 5 months ago
- ☆ 79 · Updated last week
- Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving ☆ 21 · Updated 3 weeks ago
- Code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆ 66 · Updated last month
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆ 238 · Updated 3 months ago
- Official repository for NuScenes-MQA, accepted at the LLVM-AD Workshop at WACV 2024. ☆ 25 · Updated last year
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆ 197 · Updated 5 months ago
- ☆ 76 · Updated 3 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario ☆ 195 · Updated 7 months ago
- [ECCV 2024] Official code for "Dolphins: Multimodal Language Model for Driving" ☆ 77 · Updated 4 months ago
- End-to-End Driving with Online Trajectory Evaluation via BEV World Model ☆ 87 · Updated 2 months ago
- ☆ 24 · Updated 5 months ago
- ☆ 179 · Updated last year