ebagdasa / backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
☆372 · Updated 2 years ago
Alternatives and similar repositories for backdoors101
Users interested in backdoors101 are comparing it to the repositories listed below.
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆306 · Updated last year
- TrojanZoo provides a universal PyTorch platform to conduct security research (especially backdoor attacks/defenses) of image classifica… ☆301 · Updated 2 months ago
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆304 · Updated 5 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆127 · Updated last year
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆198 · Updated 4 years ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10 (see the trigger-poisoning sketch after this list). ☆188 · Updated 3 years ago
- A library for running membership inference attacks against ML models (see the loss-threshold sketch after this list) ☆150 · Updated 2 years ago
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆200 · Updated 4 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆75 · Updated 2 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆85 · Updated 3 years ago
- Code for "Membership Inference Attacks Against Machine Learning Models" (Oakland 2017) ☆200 · Updated 7 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆151 · Updated 3 years ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆37 · Updated last month
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆130 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- ☆70 · Updated 3 years ago
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented using PyTorch. ☆64 · Updated last year
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆148 · Updated 3 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆59 · Updated 6 years ago
- ☆359 · Updated 4 months ago
- This project evaluates the privacy leakage of differentially private machine learning models. ☆135 · Updated 2 years ago
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" ☆84 · Updated 2 years ago
- [NeurIPS 2019] Deep Leakage From Gradients ☆462 · Updated 3 years ago
- Algorithms to recover input data from its gradient signal through a neural network ☆305 · Updated 2 years ago
- Code for ML Doctor ☆91 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆61 · Updated 10 months ago
- An awesome list of papers on privacy attacks against machine learning ☆627 · Updated last year
- ☆34 · Updated last year
- A Sybil-resilient distributed learning protocol. ☆104 · Updated last month
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆73 · Updated 4 years ago
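For readers comparing the backdoor repositories above (the BadNets implementations in particular), the following is a minimal sketch of the kind of trigger poisoning these projects study. It is an illustration only, assuming MNIST/CIFAR-style image tensors with values in [0, 1]; the function name and parameters are hypothetical and are not taken from any of the listed repositories.

```python
import torch

def poison_batch(images, labels, target_label=0, poison_frac=0.1, trigger_size=3):
    """BadNets-style poisoning sketch: stamp a small white square into a random
    subset of a batch and relabel those samples to the attacker's target class.

    images: float tensor of shape (N, C, H, W), values in [0, 1]
    labels: long tensor of shape (N,)
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    # Trigger: a solid white patch in the bottom-right corner of each poisoned image.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    # Flip the labels of the poisoned samples to the attacker-chosen class.
    labels[idx] = target_label
    return images, labels
```

A model trained on a mix of clean and poisoned batches like this typically behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is the behavior the backdoor-defense repositories above try to detect or remove.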
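Several of the listed repositories implement membership inference. As a point of reference, here is a minimal sketch of the common loss-threshold baseline (predict "member" when a model's loss on a sample falls below a threshold). The helper names and the threshold choice are illustrative assumptions, not the API of any repository above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    """Collect per-sample cross-entropy losses; training members tend to have lower loss."""
    model.eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses)

def predict_membership(losses, threshold):
    """Loss-threshold baseline: flag samples whose loss is below the threshold as members."""
    return losses < threshold
```

In practice the threshold is calibrated on known non-member data or via shadow models, which is the approach taken by several of the membership-inference repositories listed above.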