ASGuard-UCI / MSF-ADV
MSF-ADV is a novel physical-world adversarial attack method that can fool the Multi-Sensor Fusion (MSF) based autonomous driving (AD) perception of a victim autonomous vehicle (AV) into failing to detect a front obstacle, causing it to crash into the obstacle. This work was accepted at IEEE S&P 2021.
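The repository's actual pipeline optimizes a 3D object shape against camera and LiDAR perception models; as a minimal toy sketch of the underlying idea only (gradient-based minimization of a detector's confidence under a bounded perturbation), with all names and the toy "detector" being illustrative assumptions rather than the repository's API, one might write:

```python
# Toy sketch, NOT the MSF-ADV method: minimize a differentiable
# "detector" confidence score under an L-infinity perturbation budget.
import numpy as np

def detector(x, w, b):
    # Illustrative stand-in for a detection-confidence model:
    # sigmoid of a linear score.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def attack(x, w, b, steps=200, lr=0.5, eps=0.3):
    # Gradient descent on the confidence, projected back into
    # the eps-ball around the original input after each step.
    x0, x_adv = x.copy(), x.copy()
    for _ in range(steps):
        s = detector(x_adv, w, b)
        grad = s * (1.0 - s) * w          # d sigmoid(w.x + b) / dx
        x_adv = x_adv - lr * grad          # descend on confidence
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8) + 0.5 * np.sign(w)  # starts with high confidence
x_adv = attack(x, w, b)
print(detector(x, w, b), detector(x_adv, w, b))  # confidence drops
```

The real attack faces a much harder version of this problem: the perturbation must be a physically realizable 3D object, and the objective must defeat both camera and LiDAR branches of the fused perception stack simultaneously.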
☆79 · Updated 3 years ago
Alternatives and similar repositories for MSF-ADV
Users interested in MSF-ADV are comparing it to the repositories listed below.
- Code for the paper titled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆35 · Updated 3 years ago
- ☆11 · Updated last year
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆14 · Updated last year
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆27 · Updated last year
- An awesome & curated list of autonomous driving security papers ☆42 · Updated 2 weeks ago
- ☆53 · Updated last month
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- https://arxiv.org/pdf/1906.11897.pdf ☆21 · Updated 3 years ago
- A paper list of adversarial attacks on object detection ☆121 · Updated 2 years ago
- Adversarial Texture for Fooling Person Detectors in the Physical World ☆59 · Updated 8 months ago
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆17 · Updated 3 years ago
- ☆9 · Updated last year
- ☆25 · Updated 2 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆132 · Updated 2 years ago
- Grid Patch Attack for Object Detection ☆43 · Updated 3 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆42 · Updated 2 years ago
- ☆29 · Updated 2 months ago
- The code of our paper "Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples", in TensorFlow ☆52 · Updated 2 months ago
- https://idrl-lab.github.io/Full-coverage-camouflage-adversarial-attack/ ☆50 · Updated 2 years ago
- [CVPR 2023] T-SEA: Transfer-based Self-Ensemble Attack on Object Detection ☆108 · Updated 9 months ago
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆22 · Updated 2 years ago
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆22 · Updated last year
- A repository for the generation, visualization, and evaluation of patch-based adversarial attacks on the YOLOv3 object detection system ☆20 · Updated 4 years ago
- A paper list for localized adversarial patch research ☆154 · Updated last year
- A Leaderboard for Certifiable Robustness against Adversarial Patch Attacks ☆21 · Updated last year
- Code for the paper "PAD: Patch-Agnostic Defense against Adversarial Patch Attacks" (CVPR 2024) ☆23 · Updated last year
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago
- Code and data for PAN and PAN-phys ☆12 · Updated 2 years ago
- ☆36 · Updated 2 years ago