USSLab / PoltergeistAttack
☆25 · Updated 3 years ago
Alternatives and similar repositories for PoltergeistAttack
Users interested in PoltergeistAttack are comparing it to the repositories listed below.
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆80 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 4 years ago
- Trojan Attack on Neural Network ☆188 · Updated 3 years ago
- Code for the paper "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆38 · Updated 4 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆28 · Updated 3 years ago
- Proof-of-concept code for DeepSteal (SP'22): machine learning model extraction (weight stealing) via a memory side channel ☆11 · Updated 2 years ago
- ☆50 · Updated 4 years ago
- Sample code implementing the Targeted Bit Trojan attack ☆19 · Updated 4 years ago
- ☆10 · Updated 4 months ago
- ☆20 · Updated 5 years ago
- DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model ☆222 · Updated 6 years ago
- ☆40 · Updated 6 months ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago
- ☆66 · Updated 5 years ago
- Code for the paper "DARTS: Deceiving Autonomous Cars with Toxic Signs" ☆38 · Updated 7 years ago
- ☆27 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆82 · Updated 2 years ago
- ☆25 · Updated 4 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆110 · Updated 4 years ago
- Acceptance-rate statistics for the top security conferences: Oakland, CCS, USENIX Security, NDSS ☆192 · Updated last week
- Runs several layers of a deep learning model in TrustZone ☆90 · Updated last year
- Privacy-preserving Federated Learning with Trusted Execution Environments ☆72 · Updated 3 months ago
- Code for ML Doctor ☆91 · Updated last year
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆48 · Updated 6 years ago
- [arXiv'18] Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks ☆20 · Updated 5 years ago
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆304 · Updated 5 years ago
- Morphence: An implementation of a moving-target defense against adversarial example attacks, demonstrated for image classification models … ☆23 · Updated last year
- Stealthy Attacks against Robotic Vehicles. Please read the accompanying paper before trying out the attacks. ☆15 · Updated 3 years ago
- Robustness benchmark for DNN models ☆66 · Updated 3 years ago
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆134 · Updated 2 years ago