D1aoBoomm / GI-PIP
GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks? (ICASSP 2024)
☆16 · Updated 8 months ago
Alternatives and similar repositories for GI-PIP
Users interested in GI-PIP are comparing it to the libraries listed below.
- TSQP: Safeguarding Real-Time Inference for Quantization Neural Networks on Edge Devices (Accepted to S&P 2025) ☆17 · Updated 3 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆231 · Updated last year
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10. ☆193 · Updated 3 years ago
- Code repo for the UAI 2023 paper "Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning". ☆15 · Updated last year
- ☆572 · Updated 5 months ago
- This GitHub repository collects research papers on AI security from the four top academic conferences. ☆172 · Updated 7 months ago
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with. ☆188 · Updated 3 months ago
- A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆317 · Updated last month
- ☆367 · Updated last month
- DPSUR ☆27 · Updated 11 months ago
- ☆35 · Updated last year
- ☆46 · Updated 2 years ago
- ☆44 · Updated 8 months ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆39 · Updated 3 months ago
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022). ☆87 · Updated 2 years ago
- Backdoors framework for deep learning and federated learning. A lightweight tool for conducting research on backdoors. ☆376 · Updated 2 years ago
- Implementation of "Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes" (https://… ☆13 · Updated last year
- [ICCV 2023] Gradient inversion attack, federated learning, generative adversarial network. ☆50 · Updated last year
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆211 · Updated 6 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained). ☆284 · Updated 11 months ago
- Paper notes and code for differentially private machine learning. ☆372 · Updated 3 months ago
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated 8 months ago
- Release of the source code, version 1. ☆176 · Updated 4 years ago
- "Efficient Federated Learning for Modern NLP", to appear at MobiCom 2023.☆34Updated 2 years ago
- Reveals the vulnerabilities of SplitNN. ☆31 · Updated 3 years ago
- Privacy attacks on Split Learning ☆42 · Updated 4 years ago
- A survey of privacy protection in federated learning. ☆36 · Updated 4 years ago
- ☆178 · Updated last year
- Open-source code and data for ShadowNet (S&P Oakland '23) ☆11 · Updated last year