AminJun / lisaLinks
LISA Traffic Signs Dataset for PyTorch, for classification; 32x32 images. I use this to reproduce the Activation Clustering results.
☆20 · Updated 4 years ago
Alternatives and similar repositories for lisa
Users interested in lisa are comparing it to the repositories listed below.
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10. ☆169 · Updated 4 years ago
- Artifacts for SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations ☆27 · Updated 3 years ago
- This repository contains the implementation of three adversarial example attack methods (FGSM, I-FGSM, MI-FGSM) and one Distillation as defe… ☆134 · Updated 4 years ago
- Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models … ☆23 · Updated last year
- Library containing PyTorch implementations of various adversarial attacks and resources ☆161 · Updated 2 weeks ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆32 · Updated 4 years ago
- Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023] ☆41 · Updated 7 months ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆108 · Updated last year
- This repository contains the PyTorch implementation of Zeroth Order Optimization Based Adversarial Black Box Attack (https://arxiv.org/ab… ☆42 · Updated 2 years ago
- Ensemble Adversarial Black-Box Attacks against Deep Learning Systems Trained on the MNIST, USPS and GTSRB Datasets ☆32 · Updated 5 years ago
- A paper list for localized adversarial patch research ☆156 · Updated last month
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ☆210 · Updated 3 years ago
- CVPR 2021 official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- This is the official implementation of our paper Untargeted Backdoor Attack against Object Detection. ☆26 · Updated 2 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆66 · Updated 3 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆130 · Updated last year
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆70 · Updated 7 years ago
- Creating and defending against adversarial examples ☆41 · Updated 6 years ago
- Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD) ☆99 · Updated 4 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆50 · Updated 3 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆21 · Updated 4 years ago
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning" ☆12 · Updated 3 years ago
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆58 · Updated 6 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆42 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- A unified benchmark problem for data poisoning attacks ☆157 · Updated last year
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆58 · Updated 6 years ago
- PyTorch implementation of Adversarial Patch on ImageNet (arXiv: https://arxiv.org/abs/1712.09665) ☆64 · Updated 5 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆84 · Updated 3 years ago
- Implementation of the paper "An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models" ☆17 · Updated 5 years ago
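Several of the repositories above implement FGSM-family attacks (FGSM, I-FGSM, MI-FGSM, PGD). For orientation, here is a minimal PyTorch sketch of the basic FGSM step they all build on; this is an illustrative example, not code from any of the listed repositories, and the function name `fgsm` is our own:

```python
# Minimal FGSM sketch: perturb the input one step in the direction of the
# sign of the loss gradient, then clip back to the valid pixel range.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Return x_adv with ||x_adv - x||_inf <= eps (inputs assumed in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true labels
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # single signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterative variants such as I-FGSM and PGD repeat this step with a smaller step size and re-project onto the eps-ball after each iteration.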