Alternatives and similar repositories for CPL_attack (☆36, updated Jan 5, 2022)
Users interested in CPL_attack are comparing it to the repositories listed below.
- Gradient-Leakage Resilient Federated Learning (☆14, updated Jul 25, 2022)
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" (☆33, updated Feb 28, 2022)
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" (☆62, updated Oct 24, 2022)
- R-GAP: Recursive Gradient Attack on Privacy [accepted at ICLR 2021] (☆37, updated Feb 20, 2023)
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" (☆57, updated May 4, 2023)
- Algorithms to recover input data from their gradient signal through a neural network (☆317, updated Apr 14, 2023)
- THU-AIR: Federated Learning Privacy and Security (☆13, updated Jun 26, 2023)
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) (☆56, updated May 28, 2019)
- ☆15, updated Aug 29, 2023
- PyTorch implementation of Joint Privacy Enhancement and Quantization in Federated Learning (IEEE TSP 2023, IEEE ICASSP 2023, IEEE ISIT 20…) (☆18, updated Oct 28, 2025)
- Membership Inference Attack on Federated Learning (☆12, updated Jan 14, 2022)
- ☆10, updated Jan 31, 2022
- The code for "Improved Deep Leakage from Gradients" (iDLG) (☆166, updated Mar 4, 2021)
- ☆12, updated Dec 26, 2024
- ☆16, updated Apr 16, 2019
- ☆21, updated Oct 25, 2021
- Breaching privacy in federated learning scenarios for vision and text (☆316, updated Jan 24, 2026)
- Differentially Private Federated Learning: A Client Level Perspective (☆12, updated Jul 3, 2019)
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models (☆137, updated Dec 8, 2022)
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning", https://arxiv.org/abs… (☆13, updated Mar 9, 2021)
- [NeurIPS 2019] Deep Leakage From Gradients (☆476, updated Apr 17, 2022)
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning" (☆12, updated Mar 28, 2022)
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) (☆422, updated Jan 9, 2026)
- Adaptive Resource-Aware Split-Learning, a framework for efficient model training in IoT systems (☆15, updated Jul 23, 2023)
- Public implementation of the ICML'19 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" (☆18, updated May 28, 2020)
- Federated Learning and Membership Inference Attacks experiments on CIFAR10 (☆23, updated Jan 29, 2020)
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" (☆16, updated Dec 1, 2021)
- ☆34, updated Oct 12, 2022
- Code for the paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21 (☆46, updated Jun 12, 2023)
- Privacy preservation based on homomorphic encryption and federated learning (☆71, updated Jun 27, 2023)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations (☆20, updated Dec 27, 2020)
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" (☆13, updated Aug 22, 2022)
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" (☆37, updated Oct 3, 2022)
- A simulation showcasing federated learning with homomorphic encryption (☆10, updated Sep 24, 2023)
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… (☆202, updated May 7, 2024)
- ☆13, updated May 29, 2023
- Flower framework for Federated Learning, with Fully Homomorphic Encryption integrated (☆13, updated Jun 3, 2024)
- Privacy-preserving Federated Learning with Trusted Execution Environments (☆74, updated Jul 10, 2025)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios (☆19, updated Apr 27, 2022)
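Several of the repositories above (Deep Leakage From Gradients, iDLG, GRNN, and CPL_attack itself) implement variants of the same gradient-matching idea: the attacker optimizes dummy data until the gradient it induces matches the gradient a client shared. Below is a minimal, self-contained sketch of that loop on a toy linear-regression model; it is illustrative only (not code from any listed repository), and it assumes the label is known to the attacker, which iDLG showed can often be inferred from the gradient itself:

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def leaked_gradient(w, x, t):
    """Client-side gradient dL/dw for L = (w.x - t)^2, i.e. 2(w.x - t) x."""
    r = 2.0 * (dot(w, x) - t)
    return [r * xi for xi in x]

def match_loss_and_grad(w, x, t, g):
    """Gradient-matching loss D(x) = ||2(w.x - t) x - g||^2 and its gradient dD/dx."""
    r = 2.0 * (dot(w, x) - t)
    e = [r * xi - gi for xi, gi in zip(x, g)]   # gradient mismatch
    ex = dot(e, x)
    loss = sum(ei * ei for ei in e)
    grad = [2.0 * r * ei + 4.0 * wi * ex for ei, wi in zip(e, w)]
    return loss, grad

def reconstruct(w, g, t, steps=20000, lr=0.01):
    """Optimize dummy data until its gradient matches the leaked gradient g."""
    x = [0.1] * len(w)                          # dummy-data initialization
    loss, grad = match_loss_and_grad(w, x, t, g)
    for _ in range(steps):
        step = lr
        while step > 1e-12:                     # backtracking line search
            x_try = [xi - step * gi for xi, gi in zip(x, grad)]
            loss_try, grad_try = match_loss_and_grad(w, x_try, t, g)
            if loss_try < loss:                 # accept only improving steps
                x, loss, grad = x_try, loss_try, grad_try
                break
            step *= 0.5
    return x, loss

w = [0.5, -0.3, 0.8, 0.1]                       # shared model weights (public)
secret_x, secret_t = [1.0, 2.0, -1.0, 0.5], 3.0 # the client's private example
g = leaked_gradient(w, secret_x, secret_t)      # what the server observes

x_rec, final_loss = reconstruct(w, g, secret_t)
print(x_rec, final_loss)                        # reconstruction and residual matching loss
```

On realistic networks the same objective is optimized over image pixels with L-BFGS or Adam, often with priors such as total variation (as in the "inverting gradients" line of work above); this toy only shows the shape of the matching loop, and even here the toy model admits multiple exact matches that richer models and priors help disambiguate.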