USSLab / PoltergeistAttack
☆25 · Updated 3 years ago
Alternatives and similar repositories for PoltergeistAttack
Users interested in PoltergeistAttack are comparing it to the repositories listed below.
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆80 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- Sample code to implement the Targeted Bit Trojan attack. ☆19 · Updated 4 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- ☆20 · Updated 5 years ago
- Code for the paper "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆38 · Updated 4 years ago
- ☆50 · Updated 4 years ago
- An awesome & curated list of autonomous driving security papers ☆50 · Updated 2 weeks ago
- ☆39 · Updated 6 months ago
- Trojan Attack on Neural Network ☆188 · Updated 3 years ago
- ☆27 · Updated last year
- ☆25 · Updated 3 years ago
- ☆66 · Updated 5 years ago
- Code for the "DARTS: Deceiving Autonomous Cars with Toxic Signs" paper ☆38 · Updated 7 years ago
- DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model ☆222 · Updated 6 years ago
- Acceptance-rate statistics for the top security conferences: Oakland, CCS, USENIX Security, NDSS. ☆181 · Updated last month
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆82 · Updated 2 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago
- ☆10 · Updated 3 months ago
- Code for ML Doctor ☆90 · Updated last year
- Robustness benchmark for DNN models. ☆66 · Updated 3 years ago
- Runs several layers of a deep learning model in TrustZone ☆91 · Updated last year
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆48 · Updated 6 years ago
- ☆28 · Updated 2 years ago
- Modular Adversarial Robustness Toolkit ☆20 · Updated 2 months ago
- Stealthy Attacks against Robotic Vehicles; please read the accompanying paper before trying out the attacks. ☆14 · Updated 2 years ago
- Library for training globally-robust neural networks. ☆29 · Updated 2 months ago
- Privacy-preserving Federated Learning with Trusted Execution Environments ☆72 · Updated 3 months ago
- ☆18 · Updated 3 years ago