yuji-roh / fr-train
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (ICML 2020)
☆13 · Updated 4 years ago
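For context on what this repository implements: FR-Train trains a classifier jointly with adversarial discriminators so that the mutual information between the model's predictions and the sensitive attribute (and, for robustness, between its behavior on possibly poisoned training data and a small clean validation set) is driven down. Below is a minimal sketch of only the fairness-adversary half of that idea, assuming PyTorch, binary labels, and a binary sensitive attribute; the variable names, network sizes, and single-discriminator simplification are illustrative and are not the repository's actual API.

```python
import torch
import torch.nn as nn

# Classifier f(x) -> logit, and a fairness discriminator that tries to recover
# the sensitive attribute z from the classifier's output probability.
classifier = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # accuracy/fairness trade-off weight (illustrative value)

def train_step(x, y, z):
    """x: (N, 10) float features, y: (N, 1) float labels, z: (N, 1) float sensitive attribute."""
    # 1) Discriminator step: learn to predict z from the classifier's scores.
    with torch.no_grad():
        yhat = torch.sigmoid(classifier(x))
    d_loss = bce(discriminator(yhat), z)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Classifier step: fit y while making z hard to predict from yhat,
    #    i.e. pushing the prediction/attribute mutual information toward zero.
    logits = classifier(x)
    cls_loss = bce(logits, y)
    adv_loss = bce(discriminator(torch.sigmoid(logits)), z)
    c_loss = cls_loss - lam * adv_loss
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    return cls_loss.item(), adv_loss.item()
```

FR-Train's full training loop additionally uses a second, robustness discriminator trained against a clean validation set; see the paper and the repository for the complete objective.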
Alternatives and similar repositories for fr-train
Users who are interested in fr-train are comparing it to the repositories listed below
- ☆50 · Updated last year
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- ☆22 · Updated 6 years ago
- ☆11 · Updated 4 years ago
- ☆47 · Updated 3 years ago
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog… ☆20 · Updated 4 years ago
- ☆58 · Updated 5 years ago
- ☆37 · Updated 2 years ago
- ☆196 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆17 · Updated 3 years ago
- Papers and online resources related to machine learning fairness ☆75 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 ☆22 · Updated 4 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- Methods for removing learned data from neural nets and evaluation of those methods ☆38 · Updated 5 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P'22 ☆23 · Updated 3 years ago
- ☆32 · Updated last year
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- 💱 A curated list of data valuation (DV) to design your next data marketplace ☆135 · Updated 10 months ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆50 · Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆211 · Updated 6 months ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ☆161 · Updated 2 years ago
- ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Gold… ☆25 · Updated 2 years ago
- Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples… ☆98 · Updated 3 years ago
- [NeurIPS 2020] Code for "Boundary thickness and robustness in learning models" ☆20 · Updated 5 years ago
- A curated list of trustworthy deep learning papers. Daily updating... ☆377 · Updated 2 weeks ago
- ☆58 · Updated 3 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆39 · Updated 4 years ago