taco-group / LangCoop
🏆 Official implementation of LangCoop: Collaborative Driving with Natural Language
☆69 · Updated 3 months ago
Alternatives and similar repositories for LangCoop
Users interested in LangCoop are comparing it to the repositories listed below.
- [NeurIPS 2025] SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models ☆74 · Updated 2 months ago
- Benchmark and model for step-by-step reasoning in autonomous driving. ☆67 · Updated 9 months ago
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆82 · Updated last year
- [TMLR'25] AutoTrust, a groundbreaking benchmark designed to assess the trustworthiness of DriveVLMs. This work aims to enhance public saf… ☆52 · Updated last month
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆32 · Updated last year
- [ICLR'25] Official Implementation of STAMP: Scalable Task And Model-agnostic Collaborative Perception ☆51 · Updated 10 months ago
- ☆70 · Updated last year
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆115 · Updated last year
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆110 · Updated 10 months ago
- CoRL 2024 | Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆70 · Updated last year
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆301 · Updated 8 months ago
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆98 · Updated last year
- [ICCV 2025] Long-term Traffic Simulation with Interleaved Autoregressive Motion and Scenario Generation. ☆49 · Updated 3 months ago
- ☆90 · Updated last year
- [IROS'25] CoMamba: Real-time Cooperative Perception Unlocked with State Space Models ☆25 · Updated last year
- Official implementation of AirV2X: Unified Air-Ground Vehicle-to-Everything Collaboration ☆46 · Updated last month
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario. ☆217 · Updated last year
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆107 · Updated 6 months ago
- [CVPR 2024] MAPLM: A Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding ☆163 · Updated 2 years ago
- ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving ☆369 · Updated 2 weeks ago
- Adapting VLMs to Bench2Drive. ☆171 · Updated 2 months ago
- [ECCV 2024] The official code for "Dolphins: Multimodal Language Model for Driving" ☆86 · Updated 10 months ago
- Official repository for NuScenes-MQA. This paper was accepted at the LLVM-AD Workshop at WACV 2024. ☆35 · Updated 2 years ago
- ☆15 · Updated last year
- Official code of "MindDrive: A Vision-Language-Action Model for Autonomous Driving via Online Reinforcement Learning" ☆69 · Updated this week
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆208 · Updated 5 months ago
- Code for CVPR 2025 paper: Generating Multimodal Driving Scenes via Next-Scene Prediction ☆97 · Updated last month
- A Language Agent for Autonomous Driving ☆288 · Updated last week
- [AAAI 2025] DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation ☆213 · Updated 8 months ago
- ☆23 · Updated 4 months ago