[NeurIPS 2019] Deep Leakage From Gradients
☆476 · updated Apr 17, 2022
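The repositories below all build on the same core idea as DLG: an attacker who observes a gradient shared in federated learning optimizes a dummy input and label so that the gradient they induce matches the observed one. As a minimal illustrative sketch (a tiny linear net stands in for the paper's LeNet; all names here are hypothetical, not from the DLG codebase):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in model; the paper attacks LeNet, a linear net keeps this short.
model = nn.Sequential(nn.Linear(8, 4))
criterion = nn.CrossEntropyLoss()

# "Victim" step: one private sample whose gradient is shared with the server.
x_true = torch.randn(1, 8)
y_true = torch.tensor([2])
shared_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

# Attacker: jointly optimize a dummy input and a soft dummy label so that
# the gradient they produce matches the shared gradient (L-BFGS, as in DLG).
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy], line_search_fn="strong_wolfe")

def closure():
    optimizer.zero_grad()
    logits = model(x_dummy)
    # Cross-entropy against the softmaxed dummy label (soft-label trick).
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(y_dummy, dim=-1) * torch.log_softmax(logits, dim=-1),
        dim=-1))
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True)
    # Squared distance between dummy and shared gradients.
    diff = sum(((dg - sg) ** 2).sum()
               for dg, sg in zip(dummy_grads, shared_grads))
    diff.backward()
    return diff

initial_diff = closure().item()
for _ in range(20):
    optimizer.step(closure)
final_diff = closure().item()
print(f"gradient-matching loss: {initial_diff:.4f} -> {final_diff:.6f}")
```

Driving the gradient-matching loss toward zero pulls `x_dummy` toward the private input; the follow-up works listed below (iDLG, R-GAP, GRNN, etc.) refine the label recovery, the optimizer, or the threat model.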
Alternatives and similar repositories for dlg
Users interested in dlg are comparing it to the repositories listed below.
- The code for "Improved Deep Leakage from Gradients" (iDLG) · ☆166 · updated Mar 4, 2021
- Algorithms to recover input data from their gradient signal through a neural network · ☆317 · updated Apr 14, 2023
- paper code · ☆28 · updated Oct 5, 2020
- Breaching privacy in federated learning scenarios for vision and text · ☆316 · updated Jan 24, 2026
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆57 · updated May 4, 2023
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… · ☆202 · updated May 7, 2024
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] · ☆37 · updated Feb 20, 2023
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆153 · updated Oct 3, 2022
- A reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning" · ☆63 · updated Feb 2, 2023
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients · ☆15 · updated Jan 18, 2023
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" · ☆62 · updated Oct 24, 2022
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆56 · updated May 28, 2019
- An awesome list of papers on privacy attacks against machine learning · ☆633 · updated Mar 18, 2024
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) · ☆314 · updated Jul 25, 2024
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) · ☆203 · updated Aug 5, 2021
- ☆36 · updated Jan 5, 2022
- Code for "Data Poisoning Attacks Against Federated Learning Systems" · ☆206 · updated Jun 13, 2021
- ☆46 · updated Nov 10, 2019
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆49 · updated Dec 17, 2019
- ☆48 · updated Dec 29, 2021
- Implementation of a DP-based federated learning framework using PyTorch · ☆317 · updated Jan 3, 2026
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) · ☆61 · updated Mar 13, 2023
- ☆21 · updated Oct 25, 2021
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms · ☆703 · updated Apr 26, 2025
- FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai · ☆2,005 · updated Sep 3, 2022
- A PyTorch implementation of Federated Learning · ☆1,509 · updated Jul 25, 2024
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" · ☆151 · updated Aug 6, 2022
- Implementation of "Communication-Efficient Learning of Deep Networks from Decentralized Data" · ☆1,434 · updated May 7, 2024
- Backdoors Framework for Deep Learning and Federated Learning: a lightweight tool to conduct research on backdoors · ☆379 · updated Feb 5, 2023
- Everything you want to know about DP-based federated learning, including papers and code (mechanism: Laplace or Gaussian; dataset: femnist, shak…) · ☆423 · updated Oct 26, 2024
- Simulate a federated setting and run differentially private federated learning · ☆388 · updated Mar 7, 2025
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆74 · updated Aug 5, 2021
- Leaf: A Benchmark for Federated Settings · ☆903 · updated Mar 24, 2023
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆83 · updated Nov 22, 2021
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" · ☆84 · updated Feb 26, 2023
- Code repo for the UAI 2023 paper "Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning" · ☆16 · updated Jun 15, 2024
- autodp: a flexible and easy-to-use package for differential privacy · ☆279 · updated Dec 5, 2023
- This project evaluates the privacy leakage of differentially private machine learning models · ☆135 · updated Dec 8, 2022
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" · ☆33 · updated Feb 28, 2022