ASGuard-UCI / MSF-ADV
MSF-ADV is a novel physical-world adversarial attack method that can fool the Multi-Sensor Fusion (MSF) based autonomous driving (AD) perception of a victim autonomous vehicle (AV) into failing to detect a front obstacle and thus crashing into it. This work was accepted at IEEE S&P 2021.
☆81 · Updated 4 years ago
Alternatives and similar repositories for MSF-ADV
Users who are interested in MSF-ADV are comparing it to the repositories listed below:
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆38 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆28 · Updated 4 years ago
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆20 · Updated last year
- An awesome & curated list of autonomous driving security papers ☆69 · Updated 3 weeks ago
- ☆28 · Updated 3 years ago
- ☆86 · Updated 3 months ago
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆32 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆24 · Updated 2 years ago
- https://arxiv.org/pdf/1906.11897.pdf ☆23 · Updated 4 years ago
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆19 · Updated 3 years ago
- A repository for the generation, visualization, and evaluation of patch-based adversarial attacks on the YOLOv3 object detection system ☆18 · Updated 4 years ago
- A Paper List of Adversarial Attack on Object Detection ☆125 · Updated 2 years ago
- https://idrl-lab.github.io/Full-coverage-camouflage-adversarial-attack/ ☆55 · Updated 3 years ago
- ☆40 · Updated 3 years ago
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆25 · Updated 3 years ago
- Adversarial Texture for Fooling Person Detectors in the Physical World ☆61 · Updated last year
- ☆17 · Updated 2 months ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆111 · Updated 4 years ago
- Code and data for PAN and PAN-phys ☆13 · Updated 2 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆46 · Updated 2 years ago
- ICCV 2021 ☆32 · Updated 4 years ago
- Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles ☆22 · Updated 2 years ago
- The code of our paper "Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples", in TensorFlow ☆51 · Updated 9 months ago
- ☆44 · Updated 5 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆134 · Updated 2 years ago
- Grid Patch Attack for Object Detection ☆43 · Updated 3 years ago
- Paper sharing on adversary-related works ☆45 · Updated 5 months ago
- ☆67 · Updated 3 years ago
- A repository dedicated to tracking the latest advances in the field of Physical Adversarial Attack (PAA) ☆97 · Updated 7 months ago