CSC207-UofT / design-pattern-samples
☆10 · Updated 10 months ago
Alternatives and similar repositories for design-pattern-samples
Users interested in design-pattern-samples are comparing it to the libraries listed below.
- Course Notes for CSC207 · ☆178 · Updated this week
- A unified benchmark problem for data poisoning attacks · ☆157 · Updated last year
- Contains implementations of denoising algorithms · ☆11 · Updated 5 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…" · ☆18 · Updated last year
- Final Project for AM 207, Fall 2021. Review & experimentation with the paper "Adversarial Examples Are Not Bugs, They Are Features" · ☆10 · Updated 3 years ago
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" · ☆130 · Updated last year
- Implementation of the CURE algorithm from "Robustness via Curvature Regularization, and Vice Versa" · ☆31 · Updated 2 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) · ☆32 · Updated 4 years ago
- Implementation of https://arxiv.org/abs/1610.08401 for the CS-E4070 - Special Course in Machine Learning and Data Science: Advanced Topic… · ☆64 · Updated 5 years ago
- Methods for removing learned data from neural nets and evaluation of those methods · ☆37 · Updated 4 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" · ☆58 · Updated 6 years ago
- ☆54 · Updated 3 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10 · ☆169 · Updated 4 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 · ☆70 · Updated 7 years ago
- ☆26 · Updated 3 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) · ☆47 · Updated 6 years ago
- Code and experiments for the adversarial detection paper · ☆21 · Updated 4 years ago
- [PyTorch Implementation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks · ☆16 · Updated 4 years ago
- Code for "Label-Consistent Backdoor Attacks" · ☆58 · Updated 4 years ago
- Provable adversarial robustness at ImageNet scale · ☆396 · Updated 6 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems · ☆28 · Updated 4 years ago
- Towards Efficient and Effective Adversarial Training, NeurIPS 2021 · ☆17 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks · ☆30 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · ☆108 · Updated last year
- Code for FAB-attack · ☆33 · Updated 5 years ago
- A PyTorch Implementation of Some Backdoor Attack Algorithms, Including BadNets, SIG, FIBA, FTrojan ... · ☆20 · Updated 8 months ago
- Code for "Neural Tangent Generalization Attacks" (ICML 2021) · ☆41 · Updated 4 years ago
- ☆17 · Updated 3 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021): https://arxiv.org/abs/2011.14779 · ☆72 · Updated last year