DLUT-LYZ / CODA-LM
Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595)
☆93 · Updated 10 months ago
Alternatives and similar repositories for CODA-LM
Users interested in CODA-LM are comparing it to the libraries listed below.
- [ECCV 2024] Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving ☆94 · Updated last year
- ☆67 · Updated last year
- [AAAI 2025] Language Prompt for Autonomous Driving ☆149 · Updated 2 weeks ago
- Benchmark and model for step-by-step reasoning in autonomous driving ☆66 · Updated 6 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆203 · Updated 3 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario ☆210 · Updated 11 months ago
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆153 · Updated last year
- [ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving" ☆184 · Updated last year
- ☆179 · Updated last year
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆107 · Updated last year
- [ICCV 2025] Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆114 · Updated 2 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆99 · Updated 8 months ago
- ☆91 · Updated 9 months ago
- [NeurIPS 2025] SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models ☆66 · Updated 2 weeks ago
- [ICCV 2025] Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving ☆25 · Updated 4 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆82 · Updated 8 months ago
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆282 · Updated 6 months ago
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆31 · Updated last year
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆103 · Updated 4 months ago
- [ICRA 2024] Talk2BEV: Language-Enhanced Bird's Eye View Maps ☆113 · Updated 11 months ago
- [NeurIPS 2024] Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving ☆135 · Updated 8 months ago
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆78 · Updated 10 months ago
- Official repository for NuScenes-MQA; the paper was accepted to the LLVM-AD Workshop at WACV 2024 ☆31 · Updated last year
- ☆85 · Updated 10 months ago
- ☆92 · Updated last year
- ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving ☆268 · Updated this week
- Code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆114 · Updated last month
- [CVPR 2024 Highlight] The official repo for the paper "Abductive Ego-View Accident Video Understanding for Safe Driving Perception" ☆62 · Updated 6 months ago
- [ICLR 2025] Enhancing End-to-End Autonomous Driving with Latent World Model ☆243 · Updated 3 months ago
- [ICCV 2025] End-to-End Driving with Online Trajectory Evaluation via BEV World Model ☆125 · Updated 3 months ago