A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage".
☆62 · Oct 24, 2022 · Updated 3 years ago
Alternatives and similar repositories for GGL
Users that are interested in GGL are comparing it to the libraries listed below.
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · May 4, 2023 · Updated 2 years ago
- ☆36 · Jan 5, 2022 · Updated 4 years ago
- Breaching privacy in federated learning scenarios for vision and text ☆316 · Jan 24, 2026 · Updated 2 months ago
- Algorithms to recover input data from their gradient signal through a neural network ☆317 · Apr 14, 2023 · Updated 2 years ago
- From Gradient Leakage to Adversarial Attacks in Federated Learning ☆16 · Sep 21, 2021 · Updated 4 years ago
- ☆15 · Aug 29, 2023 · Updated 2 years ago
- [NeurIPS 2019] Deep Leakage From Gradients ☆476 · Apr 17, 2022 · Updated 3 years ago
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019) ☆56 · May 28, 2019 · Updated 6 years ago
- AutoML, Privacy Preserving, Federated Learning ☆26 · Jun 8, 2023 · Updated 2 years ago
- [CVPRW'22] A privacy attack that exploits Adversarial Training models to compromise the privacy of Federated Learning systems. ☆12 · Jul 7, 2022 · Updated 3 years ago
- Gradient-Leakage Resilient Federated Learning ☆14 · Jul 25, 2022 · Updated 3 years ago
- ☆10 · Jan 31, 2022 · Updated 4 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆166 · Mar 4, 2021 · Updated 5 years ago
- ☆12 · Dec 26, 2024 · Updated last year
- Simplified Implementation of FedPAC ☆62 · Nov 30, 2023 · Updated 2 years ago
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… ☆202 · May 7, 2024 · Updated last year
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) ☆423 · Jan 9, 2026 · Updated 2 months ago
- FGLA: Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients ☆14 · Mar 17, 2026 · Updated last week
- [ICCV-2023] Gradient inversion attack, Federated learning, Generative adversarial network. ☆52 · Jul 13, 2024 · Updated last year
- ☆26 · Dec 14, 2021 · Updated 4 years ago
- Differentially Private Federated Learning on Heterogeneous Data ☆74 · Feb 22, 2022 · Updated 4 years ago
- ☆48 · Dec 29, 2021 · Updated 4 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Feb 20, 2023 · Updated 3 years ago
- ☆16 · Sep 4, 2024 · Updated last year
- Privacy-preserving federated learning is distributed machine learning where multiple collaborators train a model through protected gradi… ☆31 · Jun 9, 2021 · Updated 4 years ago
- ☆20 · Jun 1, 2022 · Updated 3 years ago
- This repository contains the implementation of DPMLBench: Holistic Evaluation of Differentially Private Machine Learning ☆11 · Nov 24, 2023 · Updated 2 years ago
- Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach ☆17 · Jan 18, 2021 · Updated 5 years ago
- Plato: A Research Framework for Federated Learning ☆393 · Updated this week
- ☆12 · May 27, 2022 · Updated 3 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Sep 6, 2023 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Aug 5, 2021 · Updated 4 years ago
- Code for "Improving Robustness of Vision Transformers by Reducing Sensitivity to Patch Corruptions"☆14Sep 3, 2023Updated 2 years ago
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning".☆12Mar 28, 2022Updated 3 years ago
- ☆36Dec 23, 2025Updated 3 months ago
- [Usenix Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federa…☆47Sep 10, 2025Updated 6 months ago
- Paper List for Gradient Inversion Attacks in Federated Learning [IEEE TPAMI 2026]☆31Mar 20, 2026Updated last week
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks".☆16Dec 1, 2021Updated 4 years ago
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?☆15Mar 24, 2022Updated 4 years ago