ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors. (☆331, updated last year)
Related projects:
- TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classification. (☆274, updated last month)
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459); see the model-replacement sketch after this list. (☆271, updated last month)
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020). (☆174, updated 3 years ago)
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy (S&P) 2019. (☆266, updated 4 years ago)
- Breaching privacy in federated learning scenarios for vision and text. (☆260, updated 5 months ago)
- An open-source Python toolbox for backdoor attacks and defenses. (☆434, updated last month)
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models. (☆117, updated 5 months ago)
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. (☆165, updated 6 months ago)
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks". (☆114, updated 10 months ago)
- [NeurIPS 2019] Deep Leakage From Gradients; see the gradient-matching sketch after this list. (☆398, updated 2 years ago)
- Code for Data Poisoning Attacks Against Federated Learning Systems. (☆164, updated 3 years ago)
- The code for "Improved Deep Leakage from Gradients" (iDLG). (☆139, updated 3 years ago)
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341). (☆61, updated last year)
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470). (☆147, updated last year)
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10; see the trigger-poisoning sketch after this list. (☆141, updated last year)
- Algorithms to recover input data from their gradient signal through a neural network. (☆260, updated last year)
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks". (☆115, updated 2 years ago)
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained). (☆170, updated 2 months ago)
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021). (☆111, updated last month)
- A library for running membership inference attacks against ML models; see the loss-threshold sketch after this list. (☆137, updated last year)
- An awesome list of papers on privacy attacks against machine learning. (☆552, updated 6 months ago)
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning". (☆134, updated 2 years ago)
- Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning. (☆130, updated last month)
- A curated list of resources for model inversion attacks (MIA). (☆115, updated 2 months ago)
- Code for ML Doctor. (☆84, updated last month)
- A curated list of academic events on AI Security & Privacy. (☆128, updated 3 weeks ago)
- A unified benchmark problem for data poisoning attacks. (☆148, updated 11 months ago)
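
Several of the federated entries above ("How to Backdoor Federated Learning", Neurotoxin, the model-poisoning papers) build on model replacement: the attacker scales its update so that, after averaging, the global model lands on the attacker's weights. A minimal sketch over flat parameter vectors, assuming plain FedAvg with equal client weights and near-converged benign clients; the variable names and toy dimensions are illustrative, not taken from any listed implementation.

```python
import torch

def fedavg(updates):
    # Plain FedAvg over flat parameter vectors with equal client weights.
    return torch.stack(updates).mean(dim=0)

def model_replacement_update(global_w, attacker_w, n_clients):
    # Scale the attacker's delta so that averaging with (n_clients - 1)
    # benign updates (assumed close to global_w) moves the aggregate
    # onto attacker_w.
    return global_w + n_clients * (attacker_w - global_w)

dim, n_clients = 8, 10
global_w = torch.zeros(dim)
attacker_w = torch.full((dim,), 5.0)  # stand-in for backdoored weights
benign = [global_w + 0.01 * torch.randn(dim) for _ in range(n_clients - 1)]
malicious = model_replacement_update(global_w, attacker_w, n_clients)

new_global = fedavg(benign + [malicious])
print(new_global)  # approximately attacker_w
```

The scaling works because the (n_clients - 1) benign deltas roughly cancel near convergence, so the averaged model inherits the attacker's weights almost exactly.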
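The gradient-leakage entries (Deep Leakage From Gradients, iDLG, and the gradient-recovery repositories) share one core loop: optimize a dummy input so its gradients match the gradients the victim shared. A minimal sketch with a toy linear model and L-BFGS as in the DLG paper; the model, shapes, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(32, 10)  # toy stand-in for the victim model

# The victim computes gradients on a private example; the attacker observes them.
x_true = torch.randn(1, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# The attacker optimizes dummy data and soft labels to match those gradients.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(20):
    def closure():
        opt.zero_grad()
        # Soft-label cross-entropy, as in the DLG formulation.
        dummy_loss = torch.mean(torch.sum(
            -y_dummy.softmax(dim=-1) * F.log_softmax(model(x_dummy), dim=-1),
            dim=-1))
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)

print(F.mse_loss(x_dummy.detach(), x_true).item())  # shrinks toward 0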
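As an illustration of the trigger-poisoning idea behind the BadNets entry, here is a minimal sketch assuming an MNIST-style tensor dataset; the trigger size, target label, and poison rate are arbitrary choices for illustration, not values from the listed implementation.

```python
import torch

def poison_badnets_style(images, labels, target_label=0,
                         poison_rate=0.1, trigger_size=3):
    """BadNets-style poisoning sketch: stamp a small white square into
    the corner of a fraction of the images and relabel those samples
    with the attacker's target class.

    images: (N, C, H, W) float tensor in [0, 1]
    labels: (N,) long tensor
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    # Stamp the trigger: a white patch in the bottom-right corner.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    # Flip the labels of poisoned samples to the target class.
    labels[idx] = target_label
    return images, labels

# Example with random stand-in data shaped like MNIST.
x = torch.rand(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))
px, py = poison_badnets_style(x, y)
```

A model trained on the poisoned set learns to predict the target class whenever the patch is present, while behaving normally on clean inputs.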
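For the membership-inference entries, the simplest baseline is a loss threshold: training members tend to have lower loss than non-members. A minimal sketch with a toy model and random stand-in data; the threshold calibration (the mean of observed losses) is an illustrative assumption, not the API of either listed library.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_loss(model, x, y):
    # Per-example cross-entropy; lower loss suggests "member".
    return F.cross_entropy(model(x), y, reduction="none")

def loss_threshold_attack(model, x, y, threshold):
    # Predict "member" (True) when the loss falls below the threshold.
    return per_sample_loss(model, x, y) < threshold

# Toy setup with random stand-in data.
model = torch.nn.Linear(16, 4)
x_train, y_train = torch.randn(100, 16), torch.randint(0, 4, (100,))
x_test, y_test = torch.randn(100, 16), torch.randint(0, 4, (100,))

# Calibrate the threshold on data the attacker controls (here, simply the
# average loss over both pools, purely for illustration).
all_losses = torch.cat([per_sample_loss(model, x_train, y_train),
                        per_sample_loss(model, x_test, y_test)])
threshold = all_losses.mean()

guesses = loss_threshold_attack(model, x_train, y_train, threshold)
print(f"fraction flagged as members: {guesses.float().mean().item():.2f}")
```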