USSLab / PoltergeistAttack
☆22 · Updated 2 years ago
Alternatives and similar repositories for PoltergeistAttack:
Users interested in PoltergeistAttack are comparing it to the repositories listed below.
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆78 · Updated 3 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- ☆20 · Updated 4 years ago
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆35 · Updated 3 years ago
- An awesome & curated list of autonomous driving security papers ☆21 · Updated last month
- ☆64 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- Trojan Attack on Neural Network ☆183 · Updated 2 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆106 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- Code for the 'DARTS: Deceiving Autonomous Cars with Toxic Signs' paper ☆35 · Updated 6 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆16 · Updated 4 years ago
- [arXiv'18] Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks ☆19 · Updated 4 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- ☆49 · Updated 4 years ago
- ☆21 · Updated last year
- ☆83 · Updated last year
- Code for the paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆64 · Updated 2 years ago
- In this repository we provide sample code implementing the Targeted Bit Trojan attack. ☆18 · Updated 4 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆75 · Updated last year
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago
- MagNet: a Two-Pronged Defense against Adversarial Examples ☆97 · Updated 6 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆32 · Updated 4 years ago
- Robustness benchmark for DNN models. ☆66 · Updated 2 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆133 · Updated last year
- Paper sharing in adversary-related works ☆45 · Updated last week
- TAOISM: A TEE-based Confidential Heterogeneous Deployment Framework for DNN Models ☆33 · Updated 9 months ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year