USSLab / PoltergeistAttack
☆24 · Updated 3 years ago
Alternatives and similar repositories for PoltergeistAttack
Users interested in PoltergeistAttack are comparing it to the repositories listed below.
- MSF-ADV is a novel physical-world adversarial attack method that can fool Multi Sensor Fusion (MSF)-based autonomous driving (AD) p… ☆79 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- Sample code implementing the Targeted Bit Trojan attack ☆19 · Updated 4 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS '21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- Code for the paper "Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack"… ☆37 · Updated 4 years ago
- ☆20 · Updated 4 years ago
- Code for the "DARTS: Deceiving Autonomous Cars with Toxic Signs" paper ☆38 · Updated 7 years ago
- ☆25 · Updated 3 years ago
- ☆66 · Updated 4 years ago
- ☆50 · Updated 4 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆79 · Updated 2 years ago
- Trojan Attack on Neural Network ☆187 · Updated 3 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] ☆90 · Updated 2 years ago
- ☆10 · Updated 2 months ago
- ☆34 · Updated 4 months ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- Statistics of acceptance rates for the top security conferences: Oakland, CCS, USENIX Security, NDSS ☆172 · Updated last week
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… ☆133 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- PyTorch implementation of the Bit-Flip based adversarial weight Attack (BFA) ☆33 · Updated 4 years ago
- ☆27 · Updated last year
- Public release of code for Robust Physical-World Attacks on Deep Learning Visual Classification (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- Proof-of-concept code for DeepSteal (S&P '22): machine learning model extraction (weight stealing) via a memory side channel ☆11 · Updated 2 years ago
- Code for ML-Doctor ☆91 · Updated last year
- ☆16 · Updated 11 months ago
- DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model ☆217 · Updated 6 years ago
- Code for the paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆69 · Updated 3 years ago
- Stealthy attacks against robotic vehicles; please read the accompanying paper before trying out the attacks ☆14 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago