minxingzhang / MIARS
Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021)
☆17 · Updated last month
Related projects
Alternatives and complementary repositories for MIARS
- Model Poisoning Attack to Federated Recommendation ☆31 · Updated 2 years ago
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆18 · Updated 2 years ago
- Source code of FedAttack. ☆11 · Updated 2 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use CIFAR-10 data set… ☆13 · Updated 4 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆12 · Updated last year
- PyTorch implementation of backdoor unlearning. ☆16 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- ☆19 · Updated last year
- ☆25 · Updated 5 years ago
- ☆65 · Updated 2 years ago
- Learning from history for Byzantine Robustness ☆21 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- Verifying machine unlearning by backdooring ☆18 · Updated last year
- ☆23 · Updated 3 years ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆61 · Updated 4 years ago
- Adversarial attacks and defenses against federated learning. ☆15 · Updated last year
- ☆23 · Updated 2 years ago
- This repository collects the latest research progress on Privacy-Preserving Recommender Systems after 2018. ☆29 · Updated 3 years ago
- GitHub repo for the AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning ☆33 · Updated last year
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- A list of papers using/about Federated Learning, especially malicious clients and attacks. ☆12 · Updated 4 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆37 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurIPS ☆22 · Updated last month
- KNN Defense Against Clean Label Poisoning Attacks ☆11 · Updated 3 years ago
- An implementation for the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) ☆26 · Updated last year
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch. ☆56 · Updated last month
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 4 years ago
- ☆17 · Updated 3 years ago
- ☆10 · Updated 2 years ago