☆36 · Updated Jan 5, 2022
Alternatives and similar repositories for CPL_attack
Users interested in CPL_attack are comparing it to the repositories listed below.
- Gradient-Leakage Resilient Federated Learning ☆14 · Updated Jul 25, 2022
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" ☆33 · Updated Feb 28, 2022
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated May 4, 2023
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated Feb 20, 2023
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆62 · Updated Oct 24, 2022
- Membership Inference Attack on Federated Learning ☆12 · Updated Jan 14, 2022
- THU-AIR Federated Learning Privacy and Security ☆13 · Updated Jun 26, 2023
- Algorithms to recover input data from their gradient signal through a neural network ☆314 · Updated Apr 14, 2023
- ☆15 · Updated Aug 29, 2023
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆56 · Updated May 28, 2019
- The code for "Improved Deep Leakage from Gradients" (iDLG) ☆166 · Updated Mar 4, 2021
- ☆16 · Updated Apr 16, 2019
- PyTorch implementation of Joint Privacy Enhancement and Quantization in Federated Learning (IEEE TSP 2023, IEEE ICASSP 2023, IEEE ISIT 20… ☆18 · Updated Oct 28, 2025
- Official repo of the paper "Deep Regression Unlearning", accepted at ICML 2023 ☆14 · Updated Jun 14, 2023
- ☆21 · Updated Oct 25, 2021
- Federated Learning and Membership Inference Attacks experiments on CIFAR10 ☆23 · Updated Jan 29, 2020
- ☆10 · Updated Jan 31, 2022
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- This project evaluates the privacy leakage of differentially private machine learning models ☆136 · Updated Dec 8, 2022
- Breaching privacy in federated learning scenarios for vision and text ☆314 · Updated Jan 24, 2026
- ☆12 · Updated Dec 26, 2024
- ☆14 · Updated May 25, 2022
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Updated Aug 22, 2022
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning" ☆12 · Updated Mar 28, 2022
- Differentially Private Federated Learning: A Client Level Perspective ☆12 · Updated Jul 3, 2019
- ☆14 · Updated Dec 8, 2022
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) ☆422 · Updated Jan 9, 2026
- Official PyTorch implementation of the IJCAI'21 paper "GraphMI: Extracting Private Graph Data from Graph Neural Networks" ☆13 · Updated Nov 19, 2021
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? ☆15 · Updated Mar 24, 2022
- ☆34 · Updated Oct 12, 2022
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… ☆200 · Updated May 7, 2024
- Verifying machine unlearning by backdooring ☆20 · Updated Mar 25, 2023
- Code for the paper "Byzantine-Resilient Distributed Finite-Sum Optimization over Networks" ☆18 · Updated Nov 5, 2020
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" ☆16 · Updated Dec 1, 2021
- Public implementation of the ICML'19 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" ☆18 · Updated May 28, 2020
- [NeurIPS 2019] Deep Leakage From Gradients ☆474 · Updated Apr 17, 2022
- Privacy-preserving Federated Learning with Trusted Execution Environments ☆74 · Updated Jul 10, 2025
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Updated Jan 28, 2019
- ☆19 · Updated Feb 20, 2024