usc-sail / fed-ser-leakage
Related projects:
- PyTorch implementation of "Backdoor Attack against Speaker Verification"
- KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage"
- Code for "Backdoor Attacks Against Dataset Distillation"
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate"
- Federated Few-shot Learning for Mobile NLP. Conditionally accepted at MobiCom'23.
- Membership Inference Attacks and Defenses in Neural Network Pruning
- Official repo of the paper "Deep Regression Unlearning", accepted at ICML 2023
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022)
- Code for the paper "Unifying Distillation with Personalization in Federated Learning"
- Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data"
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning"
- Code for the paper "Label-Only Membership Inference Attacks"
- Verifying machine unlearning by backdooring
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons"
- Official code repository for the accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI…
- Adversarial attacks and defenses against federated learning
- Official repository for ResSFL (accepted by CVPR '22)
- Membership inference against federated learning
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning"
- Gradient-Leakage Resilient Federated Learning
- Camouflage poisoning via machine unlearning
- Code for identifying natural backdoors in existing image datasets
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
- Official implementation of the paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency
- Multi-metrics adaptively identifies backdoors in federated learning