SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆14 · Updated last year
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆105 · Updated last year
- ☆13 · Updated last year
- [USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated 8 months ago
- Federated Learning in CVPR2024 ☆19 · Updated last year
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- ☆31 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- ☆65 · Updated 2 years ago
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆31 · Updated 2 weeks ago
- Source code of FedPrompt ☆16 · Updated 3 years ago
- ☆116 · Updated last year
- This is a collection of research papers for Federated Learning for Large Language Models (FedLLM). And the repository will be continuousl… ☆102 · Updated 5 months ago
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆27 · Updated 7 months ago
- ☆55 · Updated 2 years ago
- A pytorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated 3 years ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- ☆13 · Updated last year
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆59 · Updated 10 months ago
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- Pytorch implementations of Client-Customized Adaptation for Parameter-Efficient Federated Learning (Findings of ACL: ACL 2023) ☆17 · Updated 2 years ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- Official codes for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- [ICLR2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆51 · Updated last year
- AAAI 2024 accepted paper, FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heter… ☆60 · Updated last year
- Federated Learning - PyTorch ☆15 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models — Federated Learning in Age of Foundation Model ☆43 · Updated 2 years ago
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ☆14 · Updated last year