DevPranjal / mico-first-principles
Our submission for the Microsoft Membership Inference Competition at SaTML 2023
☆15 · Updated 2 years ago
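For background, membership inference asks whether a specific example was part of a model's training set. Below is a minimal sketch of the classic loss-threshold baseline (Yeom et al., 2018) that MICO-style attacks build on; the model outputs, data, and threshold here are illustrative assumptions, not this submission's actual method.

```python
# Loss-threshold membership inference baseline (Yeom et al., 2018).
# All values below (predictions, labels, threshold) are illustrative
# assumptions, not the competition submission's approach.
import numpy as np

def per_example_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # Cross-entropy of the true class under the model's predicted probabilities.
    eps = 1e-12  # guard against log(0)
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def loss_threshold_attack(probs: np.ndarray, labels: np.ndarray, tau: float) -> np.ndarray:
    # Flag an example as a training-set member when its loss falls below tau:
    # models usually fit points they trained on more tightly than unseen ones.
    return per_example_loss(probs, labels) < tau

# Toy usage with made-up predictions over three classes.
probs = np.array([[0.90, 0.05, 0.05],   # confident and correct -> low loss
                  [0.40, 0.30, 0.30]])  # uncertain -> higher loss
labels = np.array([0, 0])
print(loss_threshold_attack(probs, labels, tau=0.5))  # [ True False]
```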
Alternatives and similar repositories for mico-first-principles
Users interested in mico-first-principles are comparing it to the repositories listed below.
- Implementations of data poisoning attacks against neural networks and related defenses. ☆94 · Updated last year
- Code for the paper: Label-Only Membership Inference Attacks ☆65 · Updated 4 years ago
- A unified benchmark problem for data poisoning attacks ☆158 · Updated last year
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆109 · Updated last year
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning" ☆18 · Updated last year
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆27 · Updated 3 years ago
- ☆30 · Updated last year
- ☆58 · Updated 5 years ago
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" ☆35 · Updated 3 years ago
- ☆32 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆41 · Updated last year
- ☆45 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- Code for ML Doctor ☆90 · Updated last year
- ☆46 · Updated last year
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · Updated 3 years ago
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow. ☆25 · Updated 4 years ago
- ☆34 · Updated 3 years ago
- ☆13 · Updated last year
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆45 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)☆48Updated 3 years ago
- [CVPR-2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks☆40Updated last year
- ☆32Updated last year
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes☆12Updated 2 years ago
- ☆193Updated last year
- ☆26Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures"☆196Updated 3 months ago
- A curated list of academic events on AI Security & Privacy☆162Updated last year