dimun / pate_torch
Implementation of the PATE technique described in https://arxiv.org/pdf/1610.05755.pdf
☆9 · Updated 5 years ago
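PATE trains an ensemble of "teacher" models on disjoint partitions of the private data and answers each student query with a differentially private, noisy plurality vote over the teachers' predictions. A minimal sketch of that noisy-max aggregation step, assuming Laplace noise with scale 1/ε as in the paper (the function name and signature are illustrative, not this repository's API):

```python
import numpy as np

def pate_noisy_argmax(teacher_votes, epsilon, rng=None):
    """Aggregate teacher label votes with the Laplace noisy-max mechanism
    from the PATE paper. (Illustrative helper, not pate_torch's API.)"""
    rng = rng or np.random.default_rng()
    # teacher_votes: one predicted class label per teacher for a single query
    counts = np.bincount(teacher_votes, minlength=int(teacher_votes.max()) + 1)
    # Add independent Laplace(0, 1/epsilon) noise to each class count
    noisy_counts = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon,
                                        size=counts.shape)
    # Release only the argmax, never the noisy counts themselves
    return int(np.argmax(noisy_counts))
```

The student model is then trained on (query, aggregated label) pairs, so its privacy cost depends only on the number of answered queries, not on the student's architecture.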
Alternatives and similar repositories for pate_torch
Users interested in pate_torch are comparing it to the libraries listed below.
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS'19) ☆34 · Updated 3 years ago
- Improved DP-SGD optimization ☆18 · Updated 6 years ago
- Integration of SplitNN for vertically partitioned data with OpenMined's PySyft ☆27 · Updated 4 years ago
- Jupyter notebooks for a custom-built DenseNet model trained on the Tiny ImageNet dataset ☆19 · Updated 4 years ago
- ☆12 · Updated 5 years ago
- Reveals the vulnerabilities of SplitNN ☆31 · Updated 2 years ago
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" ☆11 · Updated 4 years ago
- PyTorch implementation of the paper "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data" (https://arxiv.org/abs/16… ☆44 · Updated 3 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- A summary of existing works on vertical federated/split learning ☆15 · Updated 3 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆37 · Updated 6 years ago
- Practical One-Shot Federated Learning for Cross-Silo Setting ☆42 · Updated 3 years ago
- This course introduced me to three cutting-edge technologies for privacy-preserving AI: Federated Learning, Differential Privacy, and Enc… ☆11 · Updated 5 years ago
- Differentially Private Federated Learning: A Client Level Perspective ☆12 · Updated 5 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆31 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆56 · Updated 2 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆84 · Updated 4 years ago
- ☆26 · Updated 6 years ago
- Privacy-preserving deep learning ☆15 · Updated 7 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 9 months ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- Adversarial attacks and defenses against federated learning ☆17 · Updated last year
- The implementation of "Towards Faster and Better Federated Learning: A Feature Fusion Approach" (ICIP 2019) ☆36 · Updated 5 years ago
- ☆19 · Updated 2 years ago
- ☆14 · Updated last year
- ☆16 · Updated last year
- Simplicial-FL to manage client device heterogeneity in Federated Learning ☆22 · Updated last year
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆33 · Updated 4 years ago
- Implementation of Federated Learning algorithms such as FedAvg, FedAvgM, SCAFFOLD, FedOpt, Mime using PyTorch ☆12 · Updated 2 years ago
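Several of the repositories above build on DP-SGD, the other standard mechanism for differentially private training besides PATE: clip each per-example gradient, average, then add calibrated Gaussian noise. A minimal NumPy sketch of one update step, assuming the noise standard deviation `noise_multiplier * clip_norm / batch_size` for the averaged gradient (function name and signature are illustrative, not any listed library's API):

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr, clip_norm,
               noise_multiplier, rng=None):
    """One DP-SGD update (illustrative): clip each per-example gradient
    to L2 norm clip_norm, average, add Gaussian noise, take an SGD step."""
    rng = rng or np.random.default_rng()
    n = len(per_example_grads)
    # Clip each example's gradient so its L2 norm is at most clip_norm
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bound caps any single example's influence on the update, which is what lets the added noise translate into a formal (ε, δ) guarantee via composition over training steps.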