ASGuard-UCI / MSF-ADV
MSF-ADV is a novel physical-world adversarial attack that can fool the Multi-Sensor Fusion (MSF) based autonomous driving (AD) perception of a victim autonomous vehicle (AV) into failing to detect a front obstacle, causing the AV to crash into it. This work was accepted at IEEE S&P 2021.
☆80 · Updated 4 years ago
Alternatives and similar repositories for MSF-ADV
Users interested in MSF-ADV are comparing it to the repositories listed below.
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆38 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- ☆12 · Updated last year
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆18 · Updated last year
- An awesome & curated list of autonomous driving security papers ☆50 · Updated 2 weeks ago
- https://arxiv.org/pdf/1906.11897.pdf ☆22 · Updated 3 years ago
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆30 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆23 · Updated 2 years ago
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆18 · Updated 3 years ago
- A Paperlist of Adversarial Attack on Object Detection ☆123 · Updated 2 years ago
- ☆68 · Updated 3 weeks ago
- The code of our paper: 'Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples', in TensorFlow. ☆52 · Updated 5 months ago
- A repository for the generation, visualization, and evaluation of patch-based adversarial attacks on the YOLOv3 object detection system ☆19 · Updated 4 years ago
- Adversarial Texture for Fooling Person Detectors in the Physical World ☆60 · Updated 11 months ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆24 · Updated 3 years ago
- https://idrl-lab.github.io/Full-coverage-camouflage-adversarial-attack/ ☆51 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Grid Patch Attack for Object Detection ☆43 · Updated 3 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆133 · Updated 2 years ago
- ☆10 · Updated 3 months ago
- The repository is dedicated to tracking the latest advances in the field of Physical Adversarial Attack (PAA). ☆95 · Updated 3 months ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆43 · Updated 2 years ago
- A Leaderboard for Certifiable Robustness against Adversarial Patch Attacks ☆21 · Updated last year
- ☆16 · Updated 10 months ago
- ☆44 · Updated 5 years ago
- Paper sharing in adversary-related works ☆44 · Updated last month
- ICCV 2021 ☆26 · Updated 4 years ago
- Code and data for PAN and PAN-phys. ☆13 · Updated 2 years ago