zlh-thu / DPFL
A Fine-grained Differentially Private Federated Learning against Leakage from Gradients
☆15 · Jan 18, 2023 · Updated 3 years ago
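DPFL targets the gradient-leakage threat in federated learning: a server or eavesdropper that sees raw client updates can reconstruct private training data from them, so the defense is to clip each update and add calibrated noise before aggregation. Below is a minimal sketch of that generic defense (a Gaussian-mechanism treatment of client updates, in the style of DP-SGD/DP-FedAvg), not DPFL's actual fine-grained algorithm; `privatize_update`, `clip_norm`, and `noise_multiplier` are illustrative names, not this repo's API.

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Clip a client's update to an L2 ball, then add Gaussian noise."""
    # Global L2 norm across all parameter tensors of the update.
    total_norm = torch.sqrt(sum((p ** 2).sum() for p in update))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    noisy = []
    for p in update:
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        noisy.append(p * scale + noise)
    return noisy

def aggregate(client_updates):
    """Server side: average the privatized updates parameter-wise."""
    return [torch.stack(ps).mean(dim=0) for ps in zip(*client_updates)]

# Example: three clients, each sending a two-tensor model update.
updates = [[torch.randn(4, 4), torch.randn(4)] for _ in range(3)]
merged = aggregate([privatize_update(u) for u in updates])
```

Because the server only ever sees clipped, noised updates, gradient-matching attacks such as DLG (listed below) degrade as the noise multiplier grows, at some cost in model accuracy.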
Alternatives and similar repositories for DPFL
Users interested in DPFL are comparing it to the repositories listed below.
- The implementation of our ICLR 2021 work: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits ☆18 · Jul 20, 2021 · Updated 4 years ago
- Decentralized Identity ☆12 · Dec 8, 2021 · Updated 4 years ago
- ☆23 · Dec 15, 2022 · Updated 3 years ago
- This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calc… ☆12 · Feb 14, 2023 · Updated 3 years ago
- PyTorch implementation of NPAttack ☆12 · Jul 7, 2020 · Updated 5 years ago
- ☆20 · Oct 28, 2025 · Updated 3 months ago
- This repository is the implementation of Deep Dirichlet Process Mixture Models (UAI 2022) ☆15 · May 19, 2022 · Updated 3 years ago
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder (CVPR 2020) ☆12 · Aug 25, 2020 · Updated 5 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Oct 24, 2024 · Updated last year
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆17 · Sep 1, 2023 · Updated 2 years ago
- ☆19 · Mar 26, 2022 · Updated 3 years ago
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · May 31, 2023 · Updated 2 years ago
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190…) ☆19 · Apr 13, 2023 · Updated 2 years ago
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Aug 9, 2023 · Updated 2 years ago
- An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Aug 27, 2021 · Updated 4 years ago
- Everything you want about DP-Based Federated Learning, including Papers and Code. (Mechanism: Laplace or Gaussian, Dataset: femnist, shak…) ☆421 · Oct 26, 2024 · Updated last year
- ☆25 · Mar 24, 2023 · Updated 2 years ago
- ☆27 · Nov 9, 2022 · Updated 3 years ago
- Machine Learning & Security Seminar @ Purdue University ☆25 · May 9, 2023 · Updated 2 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Apr 29, 2020 · Updated 5 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆35 · Jan 9, 2023 · Updated 3 years ago
- Curated notebooks on how to train neural networks using differential privacy and federated learning. ☆67 · Jan 15, 2021 · Updated 5 years ago
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆37 · Jun 4, 2024 · Updated last year
- [NeurIPS 2023] Dynamic Personalized Federated Learning with Adaptive Differential Privacy ☆92 · Sep 10, 2024 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Dec 16, 2022 · Updated 3 years ago
- Implementation of a DP-based federated learning framework using PyTorch ☆315 · Jan 3, 2026 · Updated last month
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Jan 25, 2024 · Updated 2 years ago
- [ECCV 2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks ☆38 · Apr 23, 2025 · Updated 9 months ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆31 · Dec 12, 2021 · Updated 4 years ago
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆41 · Apr 3, 2023 · Updated 2 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Jul 3, 2021 · Updated 4 years ago
- ☆10 · May 18, 2024 · Updated last year
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Jul 15, 2024 · Updated last year
- [CVPRW'22] A privacy attack that exploits adversarially trained models to compromise the privacy of federated learning systems ☆11 · Jul 7, 2022 · Updated 3 years ago
- ☆12 · Oct 28, 2023 · Updated 2 years ago
- Mutual identity authentication system ☆10 · Nov 25, 2021 · Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Feb 19, 2022 · Updated 3 years ago
- [NeurIPS 2019] Deep Leakage From Gradients (see the gradient-matching sketch after this list) ☆474 · Apr 17, 2022 · Updated 3 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆38 · May 31, 2022 · Updated 3 years ago
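For context on the threat the DP repositories in this list defend against: the Deep Leakage From Gradients entry above reconstructs a client's private data purely from shared gradients, by optimizing a dummy input/label pair until its gradients match the observed ones. Here is a minimal sketch of that gradient-matching loop under simplifying assumptions (single example, soft labels, L-BFGS); `dlg_attack` and its arguments are illustrative names, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, input_shape, num_classes, steps=300):
    """Gradient-matching reconstruction in the spirit of DLG (NeurIPS 2019)."""
    # Start from random dummy data and a random soft label.
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        # Cross-entropy against the (optimized) soft label.
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Squared distance between dummy gradients and the victim's gradients.
        grad_diff = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Once the gradient distance is driven near zero, `dummy_x` approximates the victim's input, which is exactly why defenses such as DPFL perturb updates before they leave the client.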