turingmotors / NuScenes-MQA
Official repository for NuScenes-MQA. The paper was accepted at the LLVM-AD Workshop at WACV 2024.
☆31 · Updated last year
Alternatives and similar repositories for NuScenes-MQA
Users interested in NuScenes-MQA are comparing it to the libraries listed below.
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆31 · Updated last year
- [AAAI 2025] Language Prompt for Autonomous Driving ☆149 · Updated last month
- Talk2BEV: Language-Enhanced Bird's Eye View Maps (ICRA'24) ☆112 · Updated 11 months ago
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆154 · Updated last year
- [ECCV 2024] Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆94 · Updated last year
- ☆68 · Updated last year
- ☆92 · Updated 10 months ago
- [ICCV 2025] Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆123 · Updated 3 months ago
- Benchmark and model for step-by-step reasoning in autonomous driving ☆66 · Updated 7 months ago
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆95 · Updated 10 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario ☆213 · Updated 11 months ago
- [NeurIPS 2024] Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving ☆136 · Updated 9 months ago
- ☆86 · Updated 11 months ago
- ☆177 · Updated last year
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆83 · Updated 8 months ago
- Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving (ICCV 2025) ☆27 · Updated 5 months ago
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆103 · Updated 5 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆101 · Updated 9 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆203 · Updated 3 months ago
- [IEEE T-IV] A systematic survey of multi-modal and multi-task visual understanding foundation models for driving scenarios ☆50 · Updated last year
- [NeurIPS 2025] SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models ☆69 · Updated last month
- ☆75 · Updated 2 months ago
- The official implementation of the ECCV 2024 paper "Continuity Preserving Online CenterLine Graph Learning" ☆32 · Updated 10 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆111 · Updated last year
- Simulator designed to generate diverse driving scenarios ☆43 · Updated 8 months ago
- Code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆116 · Updated 2 months ago
- [ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving" ☆188 · Updated last year
- ☆91 · Updated last year
- Official repository for the paper "Can LVLMs Obtain a Driver's License? A Benchmark Towards Reliable AGI for Autonomous Driving" ☆29 · Updated 5 months ago
- 📚 A collection of resources and papers on Large Language Models in autonomous driving ☆27 · Updated 2 years ago