minxingzhang / MIARS
Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021)
☆20 · Oct 8, 2024 · Updated last year
Alternatives and similar repositories for MIARS
Users interested in MIARS are comparing it to the repositories listed below.
- ☆14 · Apr 11, 2021 · Updated 4 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆50 · Jul 2, 2025 · Updated 7 months ago
- ☆22 · Sep 17, 2024 · Updated last year
- An unofficial PyTorch implementation of "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on ML Models" ☆11 · Dec 23, 2023 · Updated 2 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆133 · Apr 9, 2024 · Updated last year
- Membership Inference Attack on Federated Learning ☆12 · Jan 14, 2022 · Updated 4 years ago
- ☆10 · Dec 30, 2021 · Updated 4 years ago
- ☆12 · Dec 9, 2020 · Updated 5 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Aug 29, 2022 · Updated 3 years ago
- Code for AAAI 2021 Paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" ☆11 · Apr 5, 2021 · Updated 4 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Oct 10, 2022 · Updated 3 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 5 years ago
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Mar 28, 2020 · Updated 5 years ago
- Processed datasets that we have used in our research ☆14 · Apr 30, 2020 · Updated 5 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆68 · Sep 11, 2021 · Updated 4 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Nov 11, 2020 · Updated 5 years ago
- Material supporting the tutorial "Pursuing Privacy in Recommender Systems: The View of Users and Researchers from Regulations to Applicat… ☆18 · Jul 12, 2023 · Updated 2 years ago
- Official implementation of the papers "User-controlled federated matrix factorization for recommender systems" and "FedeRank: User Contro… ☆18 · Jul 28, 2020 · Updated 5 years ago
- The code for our Updates-Leak paper ☆17 · Jul 23, 2020 · Updated 5 years ago
- ☆42 · Nov 24, 2023 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆21 · Jun 8, 2022 · Updated 3 years ago
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆18 · Apr 27, 2022 · Updated 3 years ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆18 · Jul 12, 2022 · Updated 3 years ago
- ☆27 · Oct 17, 2022 · Updated 3 years ago
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) ☆46 · Apr 22, 2022 · Updated 3 years ago
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models. ☆135 · Dec 8, 2022 · Updated 3 years ago
- ☆29 · May 8, 2023 · Updated 2 years ago
- This is the official PyTorch implementation for the paper: "EulerNet: Adaptive Feature Interaction Learning via Euler’s Formula for CTR P… ☆29 · Jul 31, 2024 · Updated last year
- Simplicial-FL to manage client device heterogeneity in Federated Learning ☆22 · Aug 3, 2023 · Updated 2 years ago
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019) ☆56 · May 28, 2019 · Updated 6 years ago
- ☆25 · Jan 20, 2019 · Updated 7 years ago
- This repository collects the latest research progress of Privacy-Preserving Recommender Systems after 2018. ☆30 · Nov 4, 2021 · Updated 4 years ago
- ☆31 · Oct 7, 2021 · Updated 4 years ago
- Certified Removal from Machine Learning Models ☆69 · Aug 23, 2021 · Updated 4 years ago
- Code for Machine Learning Models that Remember Too Much (in CCS 2017) ☆31 · Oct 15, 2017 · Updated 8 years ago
- Adversarial attack on a CNN trained on the MNIST dataset using Targeted I-FGSM and Targeted MI-FGSM ☆11 · Feb 17, 2018 · Updated 7 years ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆38 · Nov 26, 2020 · Updated 5 years ago
- G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021) ☆29 · Jan 11, 2022 · Updated 4 years ago
- ☆32 · Sep 2, 2024 · Updated last year