SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆14 · Updated last year
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆105 · Updated last year
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆59 · Updated 10 months ago
- ☆65 · Updated 2 years ago
- This is a collection of research papers for Federated Learning for Large Language Models (FedLLM). And the repository will be continuousl… ☆102 · Updated 5 months ago
- ☆13 · Updated last year
- Official codes for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- ☆116 · Updated last year
- PyTorch implementations of Client-Customized Adaptation for Parameter-Efficient Federated Learning (Findings of ACL: ACL 2023) ☆17 · Updated 2 years ago
- Latest Advances on Federated LLM Learning ☆90 · Updated 6 months ago
- ☆55 · Updated 2 years ago
- [USENIX Security 2024] PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretrainin… ☆23 · Updated last year
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆27 · Updated 7 months ago
- Source code of FedPrompt ☆16 · Updated 3 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆31 · Updated 2 weeks ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated 3 years ago
- ☆13 · Updated last year
- ☆31 · Updated last year
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- ☆25 · Updated last year
- [USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated 8 months ago
- [ICLR2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆51 · Updated last year
- ☆72 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Federated Learning - PyTorch ☆15 · Updated 4 years ago
- Papers related to federated learning in top conferences (2020-2024). ☆69 · Updated last year
- Code for NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆13 · Updated last year