LLVM-AD / MAPLM
[CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding
☆164 · Updated 2 years ago
Alternatives and similar repositories for MAPLM
Users interested in MAPLM are comparing it to the repositories listed below.
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆208 · Updated 7 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario. ☆226 · Updated last year
- Talk2BEV: Language-Enhanced Bird's Eye View Maps (ICRA'24) ☆118 · Updated last year
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆100 · Updated last year
- [AAAI 2025] Language Prompt for Autonomous Driving ☆154 · Updated 4 months ago
- [NeurIPS 2024] Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving ☆141 · Updated last year
- ☆185 · Updated 2 years ago
- [ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving" ☆201 · Updated last year
- ☆101 · Updated last year
- [ECCV 2024] Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆98 · Updated 2 years ago
- (ICLR 2025) Enhancing End-to-End Autonomous Driving with Latent World Model ☆315 · Updated 7 months ago
- ☆94 · Updated last year
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆116 · Updated last year
- ☆71 · Updated last year
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆111 · Updated 8 months ago
- Benchmark and model for step-by-step reasoning in autonomous driving. ☆69 · Updated 10 months ago
- (ICCV 2025) End-to-End Driving with Online Trajectory Evaluation via BEV World Model ☆197 · Updated 7 months ago
- This repo contains the code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆134 · Updated 2 months ago
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆315 · Updated 10 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆87 · Updated 11 months ago
- DrivePI: Spatial-aware 4D MLLM for Unified Autonomous Driving Understanding, Perception, Prediction and Planning ☆78 · Updated last month