Hazelsuko07 / TextHide
TextHide: Tackling Data Privacy in Language Understanding Tasks
☆31 · Updated 4 years ago
Alternatives and similar repositories for TextHide
Users interested in TextHide are comparing it to the repositories listed below.
- Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020) ☆143 · Updated 4 months ago
- ☆21 · Updated 4 years ago
- Implementation for "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP Findings 2020) ☆15 · Updated 5 years ago
- ☆27 · Updated 3 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019) ☆33 · Updated 4 years ago
- ☆25 · Updated 5 years ago
- A codebase that makes differentially private training of transformers easy. ☆182 · Updated 3 years ago
- ☆31 · Updated 4 years ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆61 · Updated 2 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Updated 7 years ago
- ☆23 · Updated 3 years ago
- Training data extraction on GPT-2 ☆195 · Updated 2 years ago
- Code for auditing DP-SGD ☆37 · Updated 3 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆132 · Updated last year
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆47 · Updated 6 years ago
- ☆80 · Updated 3 years ago
- Code for the paper "Spinning Language Models: Risks of Propaganda-as-a-Service and Countermeasures" ☆21 · Updated 3 years ago
- Code for the ACL 2021 Findings paper "Differential Privacy for Text Analytics via Natural Text Sanitization" ☆31 · Updated 3 years ago
- PyTorch implementation of the paper "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data" (https://arxiv.org/abs/16…) ☆45 · Updated 4 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 5 years ago
- Code for "Imitation Attacks and Defenses for Black-box Machine Translation Systems" ☆35 · Updated 5 years ago
- ☆14 · Updated 5 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆76 · Updated last year
- ☆78 · Updated 3 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated last year
- This repo implements several algorithms for learning with differential privacy. ☆111 · Updated 3 years ago
- Causal Reasoning for Membership Inference Attacks ☆11 · Updated 3 years ago
- Bad Characters: Imperceptible NLP Attacks ☆35 · Updated last year