jeremy313 / Soteria
Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"
☆57 · May 4, 2023 · Updated 2 years ago
Alternatives and similar repositories for Soteria
Users interested in Soteria are comparing it to the repositories listed below.
- ☆10 · Apr 21, 2022 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". · ☆62 · Oct 24, 2022 · Updated 3 years ago
- ☆36 · Jan 5, 2022 · Updated 4 years ago
- ☆15 · Aug 29, 2023 · Updated 2 years ago
- Algorithms to recover input data from their gradient signal through a neural network · ☆313 · Apr 14, 2023 · Updated 2 years ago
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… · ☆200 · May 7, 2024 · Updated last year
- wx · ☆11 · Aug 14, 2022 · Updated 3 years ago
- [NeurIPS 2019] Deep Leakage From Gradients · ☆474 · Apr 17, 2022 · Updated 3 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… · ☆44 · Oct 29, 2021 · Updated 4 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Sep 6, 2023 · Updated 2 years ago
- Code for the CVPR '23 paper, "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" · ☆10 · Jun 9, 2023 · Updated 2 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… · ☆21 · Oct 1, 2022 · Updated 3 years ago
- ☆19 · Mar 6, 2023 · Updated 2 years ago
- Breaching privacy in federated learning scenarios for vision and text · ☆313 · Jan 24, 2026 · Updated 3 weeks ago
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023] · ☆14 · Mar 31, 2024 · Updated last year
- ☆55 · Feb 19, 2023 · Updated 2 years ago
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… · ☆15 · Nov 27, 2023 · Updated 2 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… · ☆86 · Jun 6, 2020 · Updated 5 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). · ☆166 · Mar 4, 2021 · Updated 4 years ago
- ☆26 · Dec 14, 2021 · Updated 4 years ago
- Research into model inversion on SplitNN · ☆18 · Feb 20, 2024 · Updated last year
- Code for Double Blind Collaborative Learning (DBCL) · ☆14 · May 14, 2021 · Updated 4 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] · ☆37 · Feb 20, 2023 · Updated 2 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆56 · May 28, 2019 · Updated 6 years ago
- Official implementation for paper "No One Idles: Efficient Heterogeneous Federated Learning with Parallel Edge and Server Computation", I… · ☆17 · Jul 26, 2023 · Updated 2 years ago
- Gradient-Leakage Resilient Federated Learning · ☆14 · Jul 25, 2022 · Updated 3 years ago
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency · ☆13 · Mar 10, 2023 · Updated 2 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… · ☆19 · Apr 3, 2024 · Updated last year
- Private Adaptive Optimization with Side Information (ICML '22) · ☆16 · Jun 23, 2022 · Updated 3 years ago
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" · ☆33 · Feb 28, 2022 · Updated 3 years ago
- Paper collection of federated learning. Conferences and Journals Collection for Federated Learning from 2019 to 2021, Accepted Papers, Ho… · ☆94 · May 2, 2022 · Updated 3 years ago
- ☆19 · Feb 20, 2024 · Updated last year
- ☆12 · May 6, 2022 · Updated 3 years ago
- Reveal the vulnerabilities of SplitNN · ☆31 · Jun 16, 2022 · Updated 3 years ago
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) · ☆420 · Jan 9, 2026 · Updated last month
- FGLA: Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients · ☆14 · Dec 20, 2022 · Updated 3 years ago
- ☆25 · Jul 12, 2021 · Updated 4 years ago
- ☆19 · Feb 22, 2023 · Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" · ☆21 · Oct 25, 2019 · Updated 6 years ago
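Many of the repositories above implement, or defend against, gradient inversion in the style of Deep Leakage from Gradients (DLG, NeurIPS 2019): the attacker optimizes dummy inputs and labels so that the gradients they induce match the gradients a victim client shared. Below is a minimal PyTorch sketch of that gradient-matching loop; the toy model, input shape, label-softmax trick, and optimizer settings are illustrative assumptions, not code taken from any listed repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # assumed toy model
criterion = nn.CrossEntropyLoss()

# The victim's private example (unknown to the attacker) and the gradient it
# produces; in federated learning this gradient is what the client shares.
x_true = torch.randn(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# Attacker's dummy data, optimized so its gradient matches the observed one.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft-label logits
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                      create_graph=True)
    # Squared L2 distance between dummy and observed gradients.
    diff = sum(((dg - tg) ** 2).sum()
               for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(30):
    opt.step(closure)
print("final gradient-matching loss:", closure().item())
```

As its title suggests, Soteria's defense targets exactly this loop from the representation perspective: perturbing the gradient of the representation layer degrades the reconstruction the matching objective can recover.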