CSC207-UofT / design-pattern-samples
☆9 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for design-pattern-samples
- Final Project for AM 207, Fall 2021. Review & experimentation with the paper "Adversarial Examples Are Not Bugs, They Are Features". ☆9 · Updated 2 years ago
- A PyTorch implementation of the paper "Generating Adversarial Examples with Adversarial Networks" (AdvGAN). ☆260 · Updated 3 years ago
- TensorFlow implementation of "Generating Adversarial Examples with Adversarial Networks". ☆42 · Updated 5 years ago
- A PyTorch version of AdvGAN for the CIFAR-10 dataset. ☆11 · Updated 4 years ago
- An enhanced adversarial attack algorithm based on the Adversarial Transformation Network (ATN). ☆11 · Updated 5 years ago
- ☆48 · Updated 2 years ago
- This repository provides simple PyTorch implementations of adversarial training methods on CIFAR-10 (the training-loop sketch after this list outlines the basic recipe). ☆154 · Updated 3 years ago
- Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD); see the FGSM/PGD sketch after this list. ☆77 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks. ☆150 · Updated last year
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆11 · Updated 7 months ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3. ☆69 · Updated 6 years ago
- A PyTorch implementation of "Adversarial Examples in the Physical World". ☆17 · Updated 5 years ago
- Implementation of https://arxiv.org/abs/1610.08401 for the CS-E4070 Special Course in Machine Learning and Data Science: Advanced Topic… ☆59 · Updated 4 years ago
- ☆9 · Updated 3 years ago
- A pytorch implementation of "Towards Evaluating the Robustness of Neural Networks"☆53Updated 5 years ago
- Implemented CURE algorithm from robustness via curvature regularization and vice versa☆29Updated last year
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)☆46Updated 6 years ago
- Creating and defending against adversarial examples☆42Updated 5 years ago
- Using relativism to improve GAN-based Adversarial Attacks. 🦾☆40Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)☆30Updated 4 years ago
- Blackbox attacks for deep neural network models☆70Updated 6 years ago
- Code for ICLR2020 "Improving Adversarial Robustness Requires Revisiting Misclassified Examples"☆144Updated 4 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks"☆53Updated 4 years ago
- AdvAttacks; adversarial examples; FGSM;JSMA;CW;single pixel attack; local search attack;deepfool☆54Updated 5 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality". ☆122 · Updated 4 years ago
- A PyTorch implementation of the universal adversarial perturbation (UAP) that is easier to understand and implement. ☆54 · Updated 2 years ago
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning. ☆81 · Updated 8 months ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆28 · Updated 2 years ago
- My entry in the ICLR 2018 Reproducibility Challenge for the paper "Synthesizing Robust Adversarial Examples", https://openreview.net/pdf?id=BJDH5M-… ☆69 · Updated 6 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. ☆17 · Updated 5 years ago
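
Several of the entries above implement gradient-based attacks (FGSM, MI-FGSM, PGD). As rough orientation only, here is a minimal PyTorch sketch of one-step FGSM and iterative PGD; the model, labels, and the ε/α/step values are illustrative assumptions, not code taken from any listed repository.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: move x by epsilon in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()


def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Iterative PGD: repeated small FGSM steps, projected back into the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project into the epsilon-ball
        x_adv = x_adv.clamp(0, 1)  # keep a valid image range
    return x_adv.detach()
```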
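
The adversarial-training entries (e.g. the CIFAR-10 repositories above) generally follow the same recipe: craft adversarial examples against the current model and train on those instead of the clean inputs. A hedged sketch of one such training epoch, reusing the hypothetical `pgd_attack` helper from the previous snippet; the model, data loader, and optimizer are placeholders.

```python
import torch
import torch.nn.functional as F


def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of PGD adversarial training: minimize the loss on adversarial examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)  # craft adversarial examples against the current model
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```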