dgl-prc / m_testing_adversatial_sample
☆27, updated 5 years ago
Alternatives and similar repositories for m_testing_adversatial_sample
Users interested in m_testing_adversatial_sample are comparing it to the repositories listed below.
- Code release for RobOT (ICSE'21) (☆15, updated 2 years ago)
- ☆68, updated 5 years ago
- Attacking a dog vs fish classification that uses transfer learning with InceptionV3 (☆73, updated 7 years ago)
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" (☆61, updated last year)
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation (☆51, updated 3 years ago)
- Code release for DeepJudge (S&P'22) (☆52, updated 2 years ago)
- ☆149, updated last year
- ☆25, updated 4 years ago
- ☆19, updated 3 years ago
- TrojanZoo is a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) for image classif… (☆20, updated 4 years ago)
- ☆19, updated 4 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" (☆24, updated 3 years ago)
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) (☆48, updated 7 years ago)
- MagNet: A Two-Pronged Defense against Adversarial Examples (☆101, updated 7 years ago)
- ☆53, updated 3 years ago
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" (☆59, updated 3 years ago)
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning (☆15, updated 10 months ago)
- Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) (☆36, updated last year)
- ☆15, updated 5 years ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models (☆19, updated last month)
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks…" (☆124, updated 3 years ago)
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense (☆17, updated last year)
- ☆99, updated 5 years ago
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" (☆11, updated 3 years ago)
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] (☆10, updated 6 years ago)
- Trojan Attack on Neural Network (☆189, updated 3 years ago)
- Official repository for the CVPR 2021 Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 (☆75, updated last year)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (☆32, updated last year)
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" (☆20, updated 5 years ago)
- ☆84, updated 4 years ago