USSLab / PoltergeistAttack
☆23 · Updated 2 years ago
Alternatives and similar repositories for PoltergeistAttack:
Users interested in PoltergeistAttack are comparing it to the libraries listed below.
- MSF-ADV is a novel physical-world adversarial attack method that can fool the Multi-Sensor Fusion (MSF) based autonomous driving (AD) p… ☆78 · Updated 3 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆35 · Updated 3 years ago
- Fault Injection for Autonomous Vehicles ☆9 · Updated 5 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆106 · Updated 3 years ago
- ☆20 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- ☆64 · Updated 4 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆16 · Updated 4 years ago
- ☆49 · Updated 4 years ago
- ☆10 · Updated 4 months ago
- In this repository we provide sample code implementing the Targeted Bit Trojan attack. ☆18 · Updated 4 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Updated 4 months ago
- ☆17 · Updated 2 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆76 · Updated last year
- A repository for the generation, visualization, and evaluation of patch-based adversarial attacks on the YOLOv3 object detection system ☆18 · Updated 3 years ago
- ☆11 · Updated last year
- ☆43 · Updated last year
- https://winterwindwang.github.io/Full-coverage-camouflage-adversarial-attack/ ☆15 · Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- ☆25 · Updated 2 years ago
- Code for ML Doctor ☆86 · Updated 6 months ago
- Paper sharing on adversary-related works ☆45 · Updated last week
- Code for the 'DARTS: Deceiving Autonomous Cars with Toxic Signs' paper ☆35 · Updated 6 years ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated last year
- This is the implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… ☆11 · Updated 2 years ago
- The code of our paper "Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples", in TensorFlow. ☆51 · Updated 3 years ago
- Reward Guided Test Generation for Deep Learning ☆20 · Updated 7 months ago