dgl-prc / m_testing_adversatial_sample
☆24 · Updated 4 years ago
Alternatives and similar repositories for m_testing_adversatial_sample:
Users interested in m_testing_adversatial_sample are comparing it to the libraries listed below.
- Code release for RobOT (ICSE'21) · ☆14 · updated 2 years ago
- ☆17 · updated 3 years ago
- ☆24 · updated 3 years ago
- ☆19 · updated 5 years ago
- Code release for DeepJudge (S&P'22) · ☆50 · updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense · ☆17 · updated 9 months ago
- White-box Fairness Testing through Adversarial Sampling · ☆13 · updated 3 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation · ☆50 · updated 2 years ago
- ☆17 · updated 2 years ago
- TrojanZoo is a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) for image classif… · ☆19 · updated 4 years ago
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" · ☆54 · updated 3 months ago
- ☆14 · updated 5 years ago
- ☆27 · updated 2 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" · ☆24 · updated 2 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System · ☆32 · updated 3 months ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models · ☆13 · updated last month
- ☆14 · updated last year
- Attacking a dog-vs-fish classifier built with transfer learning on InceptionV3 · ☆70 · updated 6 years ago
- DNN Coverage-Based Testing Study · ☆16 · updated 4 years ago
- ☆10 · updated 2 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf · ☆27 · updated 3 years ago
- DLFuzz: An Efficient Fuzzing Testing Framework of Deep Learning Systems · ☆52 · updated 6 years ago
- DeepInspect code release · ☆11 · updated 5 years ago
- [IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks · ☆21 · updated last month
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification · ☆28 · updated last month
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" · ☆57 · updated 2 years ago
- ☆79 · updated 3 years ago
- Machine Learning & Security Seminar @ Purdue University · ☆25 · updated last year
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… · ☆11 · updated 2 years ago
- ☆64 · updated 4 years ago