SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆11 · Updated last year
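The repository's topic, reconstructing private training text from the gradients a GPT-2 model shares (e.g., during federated learning), is easiest to see in the gradient-matching formulation of DLG ("Deep Leakage from Gradients"). Below is a minimal sketch of that idea, assuming a toy linear classifier in place of GPT-2; the setup and all variable names are illustrative and not taken from this repository.

```python
# DLG-style gradient matching: recover a private example from observed gradients.
# A toy linear classifier stands in for the real target model (e.g., GPT-2).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Linear(16, 4)   # victim model
x_true = torch.randn(1, 16)      # private input the attacker never sees
y_true = torch.tensor([2])       # private label

# Gradients the attacker observes (e.g., a client update in federated learning).
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker optimizes dummy data and soft labels so their gradients match.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)  # logits of a soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

def closure():
    opt.zero_grad()
    dummy_loss = torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # L2 distance between observed and dummy gradients is the attack objective.
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(30):
    opt.step(closure)

print("reconstruction MSE:", F.mse_loss(x_dummy.detach(), x_true).item())
```

Attacks on transformer language models build on this objective but must additionally recover discrete tokens; LAMP (listed below) does so by adding language-model priors to guide the search.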
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆90 · Updated 8 months ago
- ☆13 · Updated last year
- ☆104 · Updated last year
- [USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆19 · Updated 4 months ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆65 · Updated 2 years ago
- ☆19 · Updated 11 months ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) ☆16 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆18 · Updated 8 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆23 · Updated last year
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆34 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 8 months ago
- ☆54 · Updated 2 years ago
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆52 · Updated 6 months ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆59 · Updated 8 months ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆60 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆73 · Updated 2 years ago
- ☆12 · Updated last year
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆26 · Updated 3 months ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆39 · Updated last year
- ☆32 · Updated last year
- Papers related to federated learning in top conferences (2020-2024). ☆69 · Updated 10 months ago
- Federated Learning in CVPR 2024 ☆18 · Updated last year
- [USENIX Security 2024] PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretrainin… ☆22 · Updated 9 months ago
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆31 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated last year
- This is a collection of research papers for Federated Learning for Large Language Models (FedLLM). And the repository will be continuousl… ☆96 · Updated last month
- [NDSS'25] The official implementation of safety misalignment. ☆16 · Updated 7 months ago
- Source code of FedPrompt ☆15 · Updated 3 years ago