ASGuard-UCI / MSF-ADV
MSF-ADV is a novel physical-world adversarial attack method that can fool Multi-Sensor Fusion (MSF) based autonomous driving (AD) perception in the victim autonomous vehicle (AV) into failing to detect a front obstacle, causing the AV to crash into it. This work was accepted at IEEE S&P 2021.
☆80 · Updated 4 years ago
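At a high level, the attack searches for a 3D object shape that both the camera and LiDAR perception pipelines fail to detect, so the fused result also misses it. Below is a minimal, hypothetical PyTorch sketch of that kind of joint camera+LiDAR adversarial optimization step; every name here (`camera_model`, `lidar_model`, `render_camera`, `render_lidar`, the loss weighting) is a placeholder for illustration, not the repository's actual API.

```python
import torch

def attack_step(mesh_vertices, camera_model, lidar_model,
                render_camera, render_lidar, lr=0.01):
    """One gradient step that pushes both sensors' detection confidence down."""
    # Work on a fresh leaf tensor so we can take gradients w.r.t. the shape.
    mesh_vertices = mesh_vertices.detach().clone().requires_grad_(True)

    # Differentiably render the candidate object into each sensor's input space.
    image = render_camera(mesh_vertices)        # e.g. a (3, H, W) image tensor
    point_cloud = render_lidar(mesh_vertices)   # e.g. an (N, 3) point tensor

    # Scalar confidences (in [0, 1]) that each pipeline detects the object.
    cam_conf = camera_model(image)
    lidar_conf = lidar_model(point_cloud)

    # Attack objective: make the object "invisible" to both sensors, plus a
    # small regularizer to keep the shape smooth / physically plausible.
    loss = cam_conf + lidar_conf + 0.1 * mesh_vertices.pow(2).mean()
    loss.backward()

    # Gradient descent on the object's vertices.
    with torch.no_grad():
        mesh_vertices -= lr * mesh_vertices.grad
    return mesh_vertices.detach()
```

The actual pipeline is considerably more involved than this toy step; the sketch only illustrates the joint two-sensor objective described above.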
Alternatives and similar repositories for MSF-ADV
Users interested in MSF-ADV are comparing it to the repositories listed below
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆38 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆28 · Updated 4 years ago
- ☆12 · Updated last year
- An awesome & curated list of autonomous driving security papers ☆61 · Updated last month
- ☆75 · Updated last month
- The PyTorch implementation for the paper "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection" ☆19 · Updated last year
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆18 · Updated 3 years ago
- A repository for the generation, visualization, and evaluation of patch-based adversarial attacks on the YOLOv3 object detection system ☆18 · Updated 4 years ago
- https://arxiv.org/pdf/1906.11897.pdf ☆22 · Updated 4 years ago
- https://idrl-lab.github.io/Full-coverage-camouflage-adversarial-attack/ ☆52 · Updated 3 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆111 · Updated 4 years ago
- Grid Patch Attack for Object Detection ☆43 · Updated 3 years ago
- Adversarial Texture for Fooling Person Detectors in the Physical World ☆60 · Updated last year
- A Paperlist of Adversarial Attack on Object Detection ☆126 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆24 · Updated 2 years ago
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark ☆31 · Updated 2 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆135 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Implementation of "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches" ☆24 · Updated 3 years ago
- Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles ☆22 · Updated 2 years ago
- The repository is dedicated to tracking the latest advances in the field of Physical Adversarial Attack (PAA). ☆95 · Updated 5 months ago
- Code for the paper "PAD: Patch-Agnostic Defense against Adversarial Patch Attacks" (CVPR 2024) ☆27 · Updated last year
- A paper list for localized adversarial patch research ☆159 · Updated 4 months ago
- The code of our paper 'Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples', in TensorFlow ☆51 · Updated 7 months ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆45 · Updated 2 years ago
- ☆32 · Updated 7 months ago
- A Leaderboard for Certifiable Robustness against Adversarial Patch Attacks ☆21 · Updated 2 years ago
- ☆25 · Updated 4 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago