nthu-datalab / On.the.Trade-off.between.Adversarial.and.Backdoor.Robustness
Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)
☆17Nov 11, 2020Updated 5 years ago
Alternatives and similar repositories for On.the.Trade-off.between.Adversarial.and.Backdoor.Robustness
Users interested in On.the.Trade-off.between.Adversarial.and.Backdoor.Robustness are comparing it to the repositories listed below.
- ☆31 · Updated Oct 7, 2021
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 (☆34 · Updated Mar 28, 2020)
- PyTorch implementation of backdoor unlearning. (☆21 · Updated Jun 8, 2022)
- ☆26 · Updated Jan 25, 2019
- Projects and homework for the Deep Learning course at NCTU (Prof. Jen-Tzung Chien). (☆11 · Updated Jun 21, 2019)
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning (☆32 · Updated Oct 10, 2022)
- Bullseye Polytope Clean-Label Poisoning Attack (☆15 · Updated Nov 5, 2020)
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (☆10 · Updated Feb 27, 2020)
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation (☆51 · Updated Jun 1, 2022)
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) (☆47 · Updated Nov 3, 2018)
- Modular evaluation metrics and a benchmark for large-scale federated learning. (☆12 · Updated Jul 25, 2024)
- ☆68 · Updated Sep 29, 2020
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks". (☆61 · Updated Nov 12, 2024)
- Source code for the ECML-PKDD 2020 paper: FedMAX: Mitigating Activation Divergence for Accurate and Communication-Efficient Federated Learn… (☆16 · Updated Dec 27, 2022)
- Verifying machine unlearning by backdooring. (☆20 · Updated Mar 25, 2023)
- RAB: Provable Robustness Against Backdoor Attacks (☆39 · Updated Oct 3, 2023)
- A simple implementation of BadNets on MNIST. (☆33 · Updated Jul 29, 2019)
- A simple backdoor model for federated learning. We use MNIST as the original data set for data attack and we use CIFAR-10 data set … (☆14 · Updated Jun 19, 2020)
- The code for our Updates-Leak paper. (☆17 · Updated Jul 23, 2020)
- Camouflage poisoning via machine unlearning. (☆19 · Updated Jul 3, 2025)
- ☆19 · Updated Jun 21, 2021
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021). (☆20 · Updated Oct 8, 2024)
- Research prototype of deletion-efficient k-means algorithms. (☆24 · Updated Dec 19, 2019)
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10. (☆193 · Updated Sep 26, 2022)
- ☆50 · Updated Aug 30, 2024
- A PyTorch implementation of "Adversarial Examples in the Physical World". (☆18 · Updated Sep 4, 2019)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient". (☆53 · Updated Nov 16, 2022)
- Robust aggregation for federated learning with the RFA algorithm. (☆53 · Updated Sep 13, 2022)
- ☆28 · Updated Jun 17, 2024
- Code for "Analyzing Federated Learning through an Adversarial Lens": https://arxiv.org/abs/1811.12470 (☆152 · Updated Oct 3, 2022)
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… (☆314 · Updated Feb 28, 2020)
- ☆25 · Updated Nov 12, 2022
- A PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… (☆23 · Updated Dec 19, 2020)
- Source code for "LEMNA: Explaining Deep Learning based Security Applications". (☆24 · Updated May 15, 2020)
- Code for the paper "Not All Unlabeled Data are Equal: Learning to Weight Data in Semi-supervised Learning", Ren et al., NeurIPS'20. (☆25 · Updated Jan 10, 2021)
- Implementation of Wasserstein adversarial attacks. (☆24 · Updated Jan 2, 2021)
- ☆27 · Updated Dec 15, 2022
- Code for the IEEE S&P 2018 paper "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning". (☆55 · Updated Mar 24, 2021)
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken". (☆26 · Updated Jan 7, 2022)