chaoge123456 / MLsecurity
Papers and code on machine learning security
☆41 · Updated 5 years ago
Related projects
Alternatives and complementary repositories for MLsecurity
- Materials on adversarial examples and poisoning attacks ☆107 · Updated 5 years ago
- DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model ☆209 · Updated 5 years ago
- AdvAttacks: adversarial examples; FGSM; JSMA; CW; single-pixel attack; local search attack; DeepFool ☆55 · Updated 5 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆49 · Updated last week
- TensorFlow implementation of Generating Adversarial Examples with Adversarial Networks ☆42 · Updated 5 years ago
- Adversarial examples ☆263 · Updated last year
- ☆79 · Updated 5 years ago
- A PyTorch version of AdvGAN for the CIFAR-10 dataset ☆11 · Updated 4 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆91 · Updated 2 years ago
- This repository implements three adversarial example attack methods (FGSM, IFGSM, MI-FGSM) and one Distillation as defe… ☆122 · Updated 3 years ago
- A novel data-free model stealing method based on GAN ☆123 · Updated 2 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆47 · Updated 2 years ago
- ☆91 · Updated 4 years ago
- Using relativism to improve GAN-based adversarial attacks 🦾 ☆40 · Updated last year
- A PyTorch implementation of the DeepFool adversarial attack, targeting the MNIST dataset and a ResNet18 network ☆16 · Updated 4 years ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10 ☆157 · Updated 2 years ago
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve… ☆92 · Updated 3 years ago
- Using FGSM, I-FGSM, and MI-FGSM to generate and evaluate adversarial samples ☆12 · Updated 5 years ago
- ☆79 · Updated 3 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- An adversarial attack on object detectors ☆140 · Updated 3 years ago
- Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020) ☆87 · Updated last year
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆267 · Updated 4 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆16 · Updated 4 years ago
- A PyTorch implementation of "Adversarial Examples in the Physical World" ☆17 · Updated 5 years ago
- FGSM implemented in PyTorch ☆29 · Updated 3 years ago
- Code for attacking a state-of-the-art face-recognition system from our paper: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "Accessorize … ☆57 · Updated 5 years ago
- An implementation of IJCAI-19 "Transferable Adversarial Attacks for Image and Video Object Detection" ☆90 · Updated 5 years ago
- A PyTorch implementation of the paper "Generating Adversarial Examples with Adversarial Networks" (AdvGAN) ☆264 · Updated 3 years ago
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆117 · Updated last year
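Several of the repositories above implement FGSM (the Fast Gradient Sign Method), which perturbs an input in the sign direction of the loss gradient. As a minimal sketch of the core idea only, not code from any repository listed here, the following uses a toy logistic-regression model in NumPy (the model, weights, and data are all hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM on a toy logistic-regression model p = sigmoid(w·x + b).

    For binary cross-entropy loss, the gradient of the loss with respect to
    the input is dL/dx = (p - y) * w; FGSM adds eps * sign(dL/dx) to x.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w               # gradient of BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)   # maximize loss under an L-inf budget

# toy example: a point the model scores confidently as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)

# the perturbation pushes the logit down, toward misclassification
print(np.dot(w, x) + b, np.dot(w, x_adv) + b)
```

In the deep-learning setting, the repositories replace the closed-form gradient above with backpropagation through the network (e.g. via autograd), but the update rule is the same single signed step.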