McGill-DMaS / Privacy-DiffGen
Differentially private data release for data mining (SIGKDD 2011): converts a relational data set into a differentially private version while preserving its utility for data mining.
☆16 · Updated 9 years ago
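Privacy-DiffGen's anonymization algorithm itself is not reproduced here, but the core differential-privacy primitive such systems build on, the Laplace mechanism, can be sketched in a few lines. This is a minimal illustration, not the repository's actual code; the function name and parameters are my own:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise calibrated for epsilon-DP.

    scale = sensitivity / epsilon: a larger epsilon (weaker privacy)
    means less noise added to the query answer.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    # Inverse-CDF sample from Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a differentially private count query (sensitivity 1)
noisy_count = laplace_mechanism(true_value=100, sensitivity=1.0, epsilon=1.0)
```

The noisy answer is unbiased: averaged over many releases it concentrates around the true count, while any single release hides the presence of one individual.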
Alternatives and similar repositories for Privacy-DiffGen:
Users interested in Privacy-DiffGen are comparing it to the repositories listed below.
- Implementation of the peer-to-peer simulation used for the experimental evaluation of the Heterogeneous Differential Privacy paper. ☆10 · Updated 4 years ago
- WAFFLE: Watermarking in Federated Learning ☆17 · Updated last year
- Cost-Aware Robust Tree Ensembles for Security Applications (USENIX Security '21) https://arxiv.org/pdf/1912.01149.pdf ☆18 · Updated 3 years ago
- A Privacy Preserving Data Mining Platform ☆46 · Updated 12 years ago
- Combines differential privacy with a multi-party computation protocol to achieve distributed machine learning. ☆26 · Updated 4 years ago
- Implementation of membership inference and model inversion attacks, extracting training-data information from an ML model. Benchmarking … ☆101 · Updated 5 years ago
- Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning' ☆53 · Updated 3 years ago
- A library for adversarial classifier evasion ☆40 · Updated 10 years ago
- Code for 'Exploiting Unintended Feature Leakage in Collaborative Learning' (Oakland 2019) ☆53 · Updated 5 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS '21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- On Training Robust PDF Malware Classifiers (USENIX Security '20) https://arxiv.org/abs/1904.03542 ☆29 · Updated 3 years ago
- PDF Malware Parser ☆20 · Updated 8 years ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS '21) ☆18 · Updated 2 years ago
- Code for the paper 'Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers' ☆57 · Updated 2 years ago
- Privacy-preserving deep learning ☆15 · Updated 7 years ago
- Example of the attack described in the paper 'Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization' ☆21 · Updated 5 years ago
- Code for the CSF 2018 paper 'Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting' ☆38 · Updated 6 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 4 years ago
- Implementation demo of the IJCAI 2022 paper 'Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation …' ☆20 · Updated 2 months ago
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation ☆16 · Updated 4 years ago
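Several entries above (the membership inference implementations and the CSF 2018 privacy-risk/overfitting paper) share one core observation: models tend to assign lower loss to examples they were trained on. A minimal loss-threshold sketch of that idea follows; the function name, threshold, and loss values are purely illustrative and not taken from any listed repository:

```python
def loss_threshold_attack(per_example_losses, tau):
    """Guess 'member' for every example whose model loss is below tau.

    An attacker who can query per-example losses exploits the fact
    that overfitted models score training (member) examples lower.
    """
    return [loss <= tau for loss in per_example_losses]

# Hypothetical losses: members (seen during training) score lower
member_losses = [0.05, 0.10, 0.08]
nonmember_losses = [1.2, 0.9, 2.1]
guesses = loss_threshold_attack(member_losses + nonmember_losses, tau=0.5)
# -> [True, True, True, False, False, False]
```

The gap between member and non-member loss distributions, and hence the attack's accuracy, grows with overfitting, which is exactly the connection the CSF 2018 paper analyzes.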