SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆13 · Updated last year
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆98 · Updated 10 months ago
- ☆13 · Updated last year
- ☆65 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Source code of FedPrompt ☆16 · Updated 3 years ago
- ☆110 · Updated last year
- A collection of research papers on Federated Learning for Large Language Models (FedLLM); the repository will be continuously updated ☆100 · Updated 3 months ago
- ☆13 · Updated last year
- Federated Learning in CVPR 2024 ☆19 · Updated last year
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆20 · Updated 6 months ago
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", accepted by KDD'2… ☆55 · Updated 8 months ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆19 · Updated 10 months ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆31 · Updated 2 years ago
- Awesome Federated Unlearning (FU) papers (continually updated) ☆103 · Updated last year
- A repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspective ☆23 · Updated 2 years ago
- Code for "Backdoor Attacks Against Dataset Distillation" ☆35 · Updated 2 years ago
- Papers related to federated learning in top conferences (2020–2024) ☆70 · Updated last year
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆34 · Updated 3 years ago
- ☆55 · Updated 2 years ago
- Code for the NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆11 · Updated last year
- [ICLR 2023; Best Paper Award at the ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆61 · Updated 10 months ago
- [ICLR 2024] Official repo of "BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models" ☆42 · Updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆61 · Updated 3 years ago
- Latest advances in federated LLM learning ☆74 · Updated 4 months ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆44 · Updated 10 months ago
- [USENIX Security 2024] PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining ☆23 · Updated 11 months ago
- ☆31 · Updated last year
- [NDSS '25] The official implementation of safety misalignment ☆17 · Updated 10 months ago
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆57 · Updated 9 months ago