teobaluta / etio
Causal Reasoning for Membership Inference Attacks
☆11 · Updated 3 years ago
Alternatives and similar repositories for etio
Users interested in etio are comparing it to the repositories listed below.
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 5 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated 5 years ago
- ☆19 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated 3 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated 4 years ago
- Source code of NAACL 2025 Findings "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" ☆15 · Updated 9 months ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆27 · Updated last year
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 2 years ago
- Official repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆43 · Updated 2 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆31 · Updated 4 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆20 · Updated 3 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- ☆26 · Updated 6 years ago
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" ☆25 · Updated 5 years ago
- ☆26 · Updated 6 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Updated 6 years ago
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) ☆26 · Updated 4 years ago
- Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020) ☆143 · Updated 2 months ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · Updated 6 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated 4 years ago