orientino / lira-pytorch
Likelihood Ratio Attack (LiRA) in PyTorch
☆15 · Updated 5 months ago
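For readers comparing these repositories, a quick reminder of what LiRA computes: the attack trains shadow models on random data splits, logit-scales the target model's confidence on each example, fits Gaussians to the confidences from shadow models trained with ("in") and without ("out") that example, and scores membership by the likelihood ratio between the two (Carlini et al., 2022). Below is a minimal sketch of that scoring step only; it assumes the per-example shadow confidences already exist, and all names (`logit_scale`, `lira_score`, the synthetic tensors) are illustrative rather than this repository's API.

```python
# Minimal sketch of the online LiRA score, assuming precomputed shadow-model
# confidences. Variable names and the synthetic data are illustrative.
import torch
from torch.distributions import Normal

def logit_scale(p: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Map the correct-class probability p to log(p / (1 - p))."""
    p = p.clamp(eps, 1 - eps)
    return torch.log(p) - torch.log1p(-p)

def lira_score(target_conf: torch.Tensor,
               in_confs: torch.Tensor,
               out_confs: torch.Tensor) -> torch.Tensor:
    """Log-likelihood ratio: log p(conf | member) - log p(conf | non-member).

    Gaussians are fit to the shadow confidences, following the parametric
    form of the attack described by Carlini et al. (2022).
    """
    mu_in, sigma_in = in_confs.mean(), in_confs.std().clamp_min(1e-4)
    mu_out, sigma_out = out_confs.mean(), out_confs.std().clamp_min(1e-4)
    log_p_in = Normal(mu_in, sigma_in).log_prob(target_conf)
    log_p_out = Normal(mu_out, sigma_out).log_prob(target_conf)
    return log_p_in - log_p_out  # higher => more likely a training member

# Synthetic confidences from 64 shadow models per side, for demonstration:
in_confs = logit_scale(torch.rand(64) * 0.2 + 0.79)   # members: high confidence
out_confs = logit_scale(torch.rand(64) * 0.5 + 0.30)  # non-members: lower
target = logit_scale(torch.tensor(0.95))
print(lira_score(target, in_confs, out_confs))
```

Thresholding this log-ratio gives the attack's decision rule; sweeping the threshold traces the ROC curve on which LiRA is typically evaluated, with emphasis on true-positive rate at low false-positive rates.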
Alternatives and similar repositories for lira-pytorch
Users interested in lira-pytorch are comparing it to the libraries listed below.
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- ☆46 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆50 · Updated 3 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Code for the paper: Label-Only Membership Inference Attacks ☆66 · Updated 3 years ago
- ☆57 · Updated 5 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- ☆70 · Updated 3 years ago
- This repo implements several algorithms for learning with differential privacy. ☆109 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆82 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated last month
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- [AAAI'23] Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning ☆26 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Certified Removal from Machine Learning Models ☆68 · Updated 4 years ago
- ☆32 · Updated 11 months ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆77 · Updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆59 · Updated 2 years ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated last year
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆38 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆48 · Updated 3 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆59 · Updated 8 months ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks". ☆15 · Updated 3 years ago
- ☆19 · Updated 11 months ago
- ☆13 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- Code for ML Doctor ☆91 · Updated last year