DeRafael / CAFE
☆22 · Updated 3 years ago
Alternatives and similar repositories for CAFE
Users interested in CAFE are comparing it to the repositories listed below.
- ☆70 · Updated 3 years ago
- Code for the paper: Label-Only Membership Inference Attacks · ☆66 · Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆50 · Updated 3 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients · ☆15 · Updated 2 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI… · ☆24 · Updated 11 months ago
- ☆54 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆73 · Updated 4 years ago
- This repo implements several algorithms for learning with differential privacy. · ☆109 · Updated 2 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" · ☆34 · Updated 2 years ago
- Official repository for ResSFL (accepted by CVPR '22) · ☆24 · Updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 · ☆73 · Updated 2 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch. · ☆63 · Updated 10 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆58 · Updated 2 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning · ☆28 · Updated 3 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning · ☆59 · Updated 8 months ago
- ☆30 · Updated 5 years ago
- ☆37 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- Code for ML Doctor · ☆91 · Updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" · ☆60 · Updated 2 years ago
- Adversarial attacks and defenses against federated learning. · ☆19 · Updated 2 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). · ☆154 · Updated 4 years ago
- ☆39 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) · ☆48 · Updated 3 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. · ☆58 · Updated 6 years ago
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" · ☆20 · Updated last year
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆54 · Updated 6 years ago
- ☆32 · Updated 11 months ago
- ☆25 · Updated 4 years ago