Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning"
☆33 · Updated Feb 28, 2022
Alternatives and similar repositories for GRNN
Users that are interested in GRNN are comparing it to the libraries listed below
- wx ☆11 · Updated Aug 14, 2022
- Gradient-Leakage Resilient Federated Learning ☆14 · Updated Jul 25, 2022
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated Oct 24, 2022
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated Feb 20, 2023
- A Tampermonkey/Violentmonkey script that publishes an article to the blockchain at the same time as posting it to a WeChat Official Account. ☆27 · Updated Dec 25, 2022
- ☆21 · Updated Oct 25, 2021
- Differentially Private Federated Learning: A Client Level Perspective ☆12 · Updated Jul 3, 2019
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated May 4, 2023
- AutoML, Privacy Preserving, Federated Learning ☆26 · Updated Jun 8, 2023
- Official code for FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- Reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". ☆63 · Updated Feb 2, 2023
- Algorithms to recover input data from their gradient signal through a neural network ☆317 · Updated Apr 14, 2023
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models. ☆137 · Updated Dec 8, 2022
- ☆15 · Updated Aug 29, 2023
- Federated Learning in Network Intrusion Detection ☆14 · Updated Feb 22, 2023
- Breaching privacy in federated learning scenarios for vision and text ☆316 · Updated Jan 24, 2026
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated Nov 11, 2020
- Membership Inference Attack on Federated Learning ☆12 · Updated Jan 14, 2022
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Updated Aug 22, 2022
- Vector quantization for stochastic gradient descent. ☆35 · Updated May 12, 2020
- Federated learning visualization ☆16 · Updated Apr 25, 2021
- PyTorch implementation of Joint Privacy Enhancement and Quantization in Federated Learning (IEEE TSP 2023, IEEE ICASSP 2023, IEEE ISIT 20… ☆18 · Updated Oct 28, 2025
- ☆19 · Updated Feb 20, 2024
- ☆19 · Updated Jun 21, 2021
- [AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models ☆12 · Updated Dec 5, 2024
- ☆19 · Updated Mar 6, 2023
- ☆20 · Updated Jun 1, 2022
- ☆14 · Updated Dec 8, 2022
- A simple backdoor model for federated learning. MNIST is used as the original data set for the data attack, and the CIFAR-10 data set… ☆14 · Updated Jun 19, 2020
- An implementation for training neural diffusion distance ☆10 · Updated Jan 31, 2020
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C… ☆46 · Updated Jul 18, 2025
- Single Image Backdoor Inversion via Robust Smoothed Classifiers ☆17 · Updated Jul 18, 2023
- [ICML'25] MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents ☆24 · Updated Jul 31, 2025
- Official implementation for the CVPR 2025 paper "Instant Adversarial Purification with Adversarial Consistency Distillation". ☆15 · Updated Dec 19, 2025
- ☆26 · Updated Dec 14, 2021
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆65 · Updated May 22, 2020
- ☆10 · Updated Jan 31, 2022
- ☆22 · Updated Sep 17, 2024
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆31 · Updated Apr 19, 2021