KAI-YUE / rog
☆15 · Updated 2 years ago
Alternatives and similar repositories for rog
Users interested in rog are comparing it to the libraries listed below.
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022). ☆87 · Updated 2 years ago
- ☆55 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective". ☆57 · Updated 2 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆35 · Updated 3 years ago
- ☆36 · Updated 4 years ago
- Code for ML-Doctor. ☆92 · Updated last year
- ☆73 · Updated 3 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. ☆60 · Updated last year
- ☆37 · Updated 4 years ago
- Privacy attacks on Split Learning. ☆43 · Updated 4 years ago
- ☆46 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks". ☆67 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- ☆46 · Updated 6 years ago
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented using PyTorch. ☆66 · Updated last year
- ☆25 · Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆84 · Updated 2 years ago
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning". ☆21 · Updated 2 years ago
- ☆51 · Updated 4 years ago
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning". ☆41 · Updated 2 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models". ☆85 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341). ☆82 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated 4 months ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020). ☆202 · Updated 4 years ago
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning". ☆148 · Updated 3 years ago
- Reveals the vulnerabilities of SplitNN. ☆31 · Updated 3 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI… ☆25 · Updated last year
- Learning from history for Byzantine Robustness. ☆25 · Updated 4 years ago