turingmotors / NuScenes-MQA
Official repository for NuScenes-MQA. The paper was accepted at the LLVM-AD Workshop at WACV 2024.
☆29 · Updated last year
Alternatives and similar repositories for NuScenes-MQA
Users interested in NuScenes-MQA are comparing it to the repositories listed below.
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆31 · Updated last year
- [AAAI 2025] Language Prompt for Autonomous Driving ☆141 · Updated 7 months ago
- Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆89 · Updated last year
- Benchmark and model for step-by-step reasoning in autonomous driving ☆61 · Updated 4 months ago
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆92 · Updated 7 months ago
- ☆45 · Updated last month
- ☆63 · Updated 11 months ago
- Talk2BEV: Language-Enhanced Bird's Eye View Maps (ICRA'24) ☆112 · Updated 8 months ago
- [ICCV'25] Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆90 · Updated last week
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario ☆197 · Updated 8 months ago
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆142 · Updated last year
- ☆76 · Updated 6 months ago
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆96 · Updated last month
- ☆179 · Updated last year
- ☆76 · Updated 4 months ago
- [NeurIPS 2024] Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving ☆130 · Updated 6 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆78 · Updated 5 months ago
- ☆26 · Updated last year
- Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving (ICCV 2025) ☆22 · Updated last month
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆101 · Updated 9 months ago
- ☆80 · Updated 8 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆98 · Updated 5 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆198 · Updated 2 weeks ago
- [Official] [IROS 2024] A goal-oriented planning to lift VLN performance for Closed-Loop Navigation: Simple, Yet Effective ☆28 · Updated last year
- [IEEE T-IV] A systematic survey of multi-modal and multi-task visual understanding foundation models for driving scenarios ☆51 · Updated last year
- [Communications in Transportation Research] Official PyTorch implementation of "GPT-4 enhanced multimodal grounding for autonomous driv… ☆24 · Updated 8 months ago
- [ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving" ☆175 · Updated 9 months ago
- ☆91 · Updated 9 months ago
- This repo contains the code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆87 · Updated 2 months ago
- 📚 A collection of resources and papers on Large Language Models in autonomous driving ☆27 · Updated last year