SamuelGong / grad_attacks
Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models.
☆14 · Updated last year
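The listing's topic, gradient leakage (also called gradient inversion), can be summarized in a few lines. Below is a minimal sketch in the spirit of "Deep Leakage from Gradients" (Zhu et al., NeurIPS 2019), not code from this or any listed repository: the attacker observes the gradients a victim computed on secret data and optimizes dummy data until its gradients match. The toy linear model, dimensions, and iteration count are illustrative assumptions.

```python
# Minimal DLG-style gradient-matching sketch (toy model; illustrative
# assumptions throughout, not the code of any repository listed here).
import torch

torch.manual_seed(0)

# Victim side: a tiny linear classifier computes gradients on a secret example.
model = torch.nn.Linear(8, 4)
params = tuple(model.parameters())
secret_x = torch.randn(1, 8)
secret_y = torch.tensor([2])
loss = torch.nn.functional.cross_entropy(model(secret_x), secret_y)
true_grads = torch.autograd.grad(loss, params)  # what the attacker observes

# Attacker side: optimize dummy data and a soft label so that their
# gradients match the observed ones.
dummy_x = torch.randn(1, 8, requires_grad=True)
dummy_y = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([dummy_x, dummy_y])

def closure():
    opt.zero_grad()
    dummy_loss = torch.nn.functional.cross_entropy(
        model(dummy_x), torch.softmax(dummy_y, dim=-1))
    grads = torch.autograd.grad(dummy_loss, params, create_graph=True)
    # Gradient-matching objective: squared distance to the true gradients.
    match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    match.backward()
    return match

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", (dummy_x - secret_x).norm().item())
```

On a model this small the reconstruction error typically drops to near zero; the repositories below study when and how well this works at the scale of real models such as GPT-2.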
Alternatives and similar repositories for grad_attacks
Users interested in grad_attacks are comparing it to the repositories listed below.
- ☆107 · Updated last year
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspective ☆23 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- ☆65 · Updated 2 years ago
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV 2023) ☆24 · Updated 2 years ago
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ☆14 · Updated last year
- ☆14 · Updated last year
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆51 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) ☆16 · Updated 2 years ago
- [NDSS'25] Official implementation of the safety misalignment paper ☆17 · Updated last year
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risks of RAG ☆64 · Updated last year
- ☆116 · Updated last year
- [USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated 9 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- Official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", accepted at KDD'24 ☆60 · Updated 10 months ago
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning"☆31Updated last month
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆62 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- ☆13 · Updated last year
- PyTorch implementation of "Client-Customized Adaptation for Parameter-Efficient Federated Learning" (Findings of ACL 2023) ☆17 · Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- ☆55 · Updated 2 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆42 · Updated last year
- ☆25 · Updated last year
- Code for the NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" ☆13 · Updated last year
- ☆30 · Updated last year
- ☆28 · Updated last year
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆29 · Updated 8 months ago
- ☆27 · Updated 3 years ago