VincentNi0107 / BadVLMDriver
☆9 · Updated last year
Alternatives and similar repositories for BadVLMDriver
Users interested in BadVLMDriver are comparing it to the repositories listed below.
- [TMLR'24] On the Adversarial Robustness of Camera-based 3D Object Detection ☆25 · Updated last year
- ☆49 · Updated last month
- The PyTorch implementation of the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆12 · Updated last year
- Focused on the safety and security of Embodied AI ☆35 · Updated last week
- [CVPR 2022] Shape-invariant Adversarial Point Clouds ☆46 · Updated last year
- [CVPR 2023] Understanding the Robustness of 3D Object Detection With Bird's-Eye-View Representations in Autonomous Driving ☆29 · Updated last year
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆79 · Updated 3 years ago
- ☆42 · Updated 5 months ago
- AutoTrust, a groundbreaking benchmark designed to assess the trustworthiness of DriveVLMs. This work aims to enhance public safety by ens… ☆45 · Updated 4 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆48 · Updated 4 months ago
- A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models. ☆41 · Updated 3 weeks ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆53 · Updated last year
- ☆12 · Updated last year
- ☆25 · Updated 2 years ago
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆22 · Updated 2 years ago
- [CVPR 2024] LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs ☆30 · Updated last year
- The official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) ☆53 · Updated last month
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 2 months ago
- The official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" ☆36 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆33 · Updated last year
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆25 · Updated 5 months ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆60 · Updated last year
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆25 · Updated 5 months ago
- ECSO (Make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆23 · Updated 6 months ago
- ☆18 · Updated last year
- Repo for the USENIX Security 2024 paper "On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures" https://arxi… ☆17 · Updated 8 months ago
- [NeurIPS 2023] Content-based Unrestricted Adversarial Attack ☆25 · Updated last year
- ☆34 · Updated 2 years ago
- This repository contains the ViewFool and ImageNet-V proposed by the paper "ViewFool: Evaluating the Robustness of Visual Recognition to … ☆32 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year