taco-group / AirV2X-Perception
Official implementation of AirV2X: Unified Air-Ground Vehicle-to-Everything Collaboration
☆53 · Updated 2 months ago
Alternatives and similar repositories for AirV2X-Perception
Users interested in AirV2X-Perception are comparing it to the repositories listed below.
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆315 · Updated 10 months ago
- This repo contains the code for the paper "LightEMMA: Lightweight End-to-End Multimodal Model for Autonomous Driving" ☆134 · Updated 2 months ago
- CoLMDriver: LLM-based Negotiation Benefits Cooperative Autonomous Driving ☆42 · Updated 3 months ago
- All you need for Multi-Agent Embodied Autonomous Driving (MAAD) ☆39 · Updated 8 months ago
- ☆80 · Updated 2 months ago
- ☆157 · Updated 8 months ago
- [ICLR 2026] ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving ☆432 · Updated this week
- Griffin: Aerial-Ground Cooperative Detection and Tracking Benchmark ☆88 · Updated 5 months ago
- [NeurIPS 2025] AutoVLA: A Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-T… ☆373 · Updated last month
- ☆91 · Updated last year
- Learning to Drive with GPT ☆297 · Updated 2 years ago
- [CVPR 2025, Spotlight] SimLingo (CarLLava): Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment ☆336 · Updated 5 months ago
- [ICLR 2024] DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models ☆300 · Updated last year
- ☆101 · Updated last year
- ☆175 · Updated 4 months ago
- Awesome CoT for Autonomous Driving ☆65 · Updated 7 months ago
- [AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario. ☆225 · Updated last year
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆209 · Updated 7 months ago
- [ICLR 2025] DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving ☆184 · Updated 3 weeks ago
- [ICCV 2025] Official code of "ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation" ☆569 · Updated last month
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆100 · Updated last year
- Bridging Large Vision-Language Models and End-to-End Autonomous Driving ☆518 · Updated last year
- VLM-RL: A Unified Vision Language Models and Reinforcement Learning Framework for Safe Autonomous Driving ☆226 · Updated 4 months ago
- Track 1: Driving with Language ☆27 · Updated 5 months ago
- Repo of "GoalFlow: Goal-Driven Flow Matching for Multimodal Trajectories Generation in End-to-End Autonomous Driving" ☆345 · Updated 6 months ago
- ☆184 · Updated 2 years ago
- [ECCV 2024] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving ☆110 · Updated 8 months ago
- ☆42 · Updated 4 months ago
- [NeurIPS 2025] SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models ☆78 · Updated 4 months ago
- A Language Agent for Autonomous Driving ☆291 · Updated last month