USSLab / PoltergeistAttack
☆24 · Updated 2 years ago
Alternatives and similar repositories for PoltergeistAttack
Users interested in PoltergeistAttack are comparing it to the repositories listed below.
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- MSF-ADV is a novel physical-world adversarial attack method, which can fool the Multi Sensor Fusion (MSF) based autonomous driving (AD) p… ☆79 · Updated 3 years ago
- Code for the paper entitled "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆35 · Updated 3 years ago
- ☆20 · Updated 4 years ago
- Fault Injection for Autonomous Vehicles ☆9 · Updated 5 years ago
- An awesome & curated list of autonomous driving security papers ☆40 · Updated last week
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- In this repository we provide sample code to implement the Targeted Bit Trojan attack. ☆19 · Updated 4 years ago
- ☆49 · Updated 4 years ago
- ☆66 · Updated 4 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆16 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- ☆24 · Updated 3 years ago
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- TAOISM: A TEE-based Confidential Heterogeneous Deployment Framework for DNN Models ☆35 · Updated last year
- ☆32 · Updated last month
- Code for ML Doctor ☆87 · Updated 9 months ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- ☆10 · Updated 6 months ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated last year
- Adversarial Driving vs. Autonomous Driving. ☆21 · Updated last year
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch ☆22 · Updated last year
- The code of our paper: 'Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples', in TensorFlow. ☆52 · Updated last week
- Code for ISSTA'21 paper 'Attack as Defense: Characterizing Adversarial Examples using Robustness'. ☆11 · Updated 3 years ago
- Statistics of acceptance rates for the top conferences: Oakland, CCS, USENIX Security, NDSS. ☆146 · Updated 2 months ago
- Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models … ☆22 · Updated 9 months ago
- Code for the 'DARTS: Deceiving Autonomous Cars with Toxic Signs' paper ☆37 · Updated 7 years ago
- Repo for USENIX Security 2024 paper "On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures" https://arxi… ☆17 · Updated 8 months ago
- Library for training globally-robust neural networks. ☆28 · Updated last year