SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆14 · Updated last year
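For readers new to the topic: gradient leakage (gradient inversion) attacks try to reconstruct a client's private training data from the gradients it shares, e.g. with a federated-learning server. Below is a minimal sketch of a DLG-style attack ("Deep Leakage from Gradients", Zhu et al., NeurIPS 2019) on a toy linear classifier; it is not code from grad_attacks, and the model size, input shapes, optimizer, learning rate, and step count are all illustrative assumptions.

```python
# Minimal DLG-style gradient inversion sketch (NOT the grad_attacks code).
# The toy linear "victim" model, shapes, and optimizer settings are assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Linear(16, 4)   # toy victim model (assumption)
x_true = torch.randn(1, 16)      # private input the attacker wants to recover
y_true = torch.tensor([2])       # private label

# 1. Victim computes gradients on its private batch (what an FL client would share).
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# 2. Attacker optimizes dummy data/labels so their gradients match the shared ones.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)   # soft label, optimized jointly
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    # Cross-entropy with the (softmaxed) dummy label, as in the DLG formulation.
    dummy_loss = torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
    )
    # create_graph=True so the gradient-matching loss can be backpropagated
    # through the dummy gradients to x_dummy and y_dummy.
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```

Per its description, grad_attacks studies this kind of attack against GPT-2, where inputs are discrete tokens rather than continuous tensors; text-oriented variants therefore typically optimize token embeddings or add language-model priors (see the LAMP entry in the list below).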
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆110 · Updated last year
- ☆117 · Updated last year
- ☆14 · Updated last year
- ☆65 · Updated 2 years ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆52 · Updated last year
- Source code of FedPrompt ☆16 · Updated 3 years ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆62 · Updated 3 years ago
- Federated Learning in CVPR 2024 ☆19 · Updated last year
- PyTorch implementations of Client-Customized Adaptation for Parameter-Efficient Federated Learning (Findings of ACL 2023) ☆17 · Updated 2 years ago
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆61 · Updated 11 months ago
- Latest Advances on Federated LLM Learning ☆97 · Updated 7 months ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- A collection of research papers on Federated Learning for Large Language Models (FedLLM); the repository will be continuousl… ☆103 · Updated 6 months ago
- ☆73 · Updated 3 years ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- Code for NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆13 · Updated last year
- ☆13 · Updated last year
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated 9 months ago
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆29 · Updated 8 months ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated 5 months ago
- ☆55 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆35 · Updated 3 years ago
- Multi-metrics adaptively identifies backdoors in Federated Learning ☆37 · Updated 6 months ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- AAAI 2024 accepted paper, FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heter… ☆62 · Updated last year