mia-diffusion (☆10, updated Mar 20, 2023)
Alternatives and similar repositories for mia-diffusion
Users interested in mia-diffusion are comparing it to the libraries listed below.
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? (☆43, updated Sep 4, 2024)
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … (☆12, updated Sep 6, 2023)
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture (☆16, updated Aug 29, 2022)
- (no description) (☆25, updated Nov 14, 2022)
- (no description) (☆18, updated Oct 7, 2022)
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging (☆16, updated Oct 14, 2024)
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" (☆16, updated Dec 1, 2021)
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" (☆16, updated Mar 8, 2024)
- (no description) (☆24, updated Aug 18, 2023)
- SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet (☆27, updated Dec 29, 2022)
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) (☆27, updated Nov 18, 2024)
- Membership Inference Attacks and Defenses in Neural Network Pruning (☆28, updated Jul 12, 2022)
- (no description) (☆29, updated Mar 3, 2021)
- Official repository of the paper "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" (Findings of EACL …) (☆12, updated Feb 11, 2026)
- A Watermark-Conditioned Diffusion Model for IP Protection (ECCV 2024) (☆35, updated Apr 5, 2025)
- (no description) (☆12, updated May 6, 2022)
- [CCS 2024] Official code implementation of "BadMerging: Backdoor Attacks Against Model Merging" (☆35, updated Aug 22, 2024)
- (no description) (☆32, updated Sep 2, 2024)
- (no description) (☆17, updated Feb 1, 2025)
- A project bringing ethics back to AI (☆11, updated Aug 7, 2023)
- Code repo for our Pattern Recognition journal paper on IPR protection of image captioning models (☆11, updated Aug 29, 2023)
- Defending against Model Stealing via Verifying Embedded External Features (☆38, updated Feb 19, 2022)
- (no description) (☆11, updated Oct 30, 2024)
- [IEEE TIP] Official implementation of "BadCM: Invisible Backdoor Attack against Cross-Modal Learning" (☆14, updated Aug 30, 2024)
- (no description) (☆11, updated Dec 9, 2018)
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" (☆38, updated May 31, 2022)
- Fingerprint large language models (☆49, updated Jul 11, 2024)
- SurFree: a fast surrogate-free black-box attack (☆44, updated Jun 27, 2024)
- HFMF: Hierarchical Fusion Meets Multi-Stream Models for Deepfake Detection (☆13, updated Jan 6, 2025)
- Very concise example of integrated gradients (a method to reveal areas of attention in input images) (☆10, updated Jun 17, 2019)
- Code for the paper "RemovalNet: DNN Model Fingerprinting Removal Attack", IEEE TDSC 2023 (☆10, updated Nov 27, 2023)
- DeepREAL: A Deep Learning Powered Multi-scale Modeling Framework Towards Predicting Out-of-distribution Receptor Activity of Ligand Bindi… (☆11, updated Apr 23, 2022)
- (no description) (☆15, updated Apr 4, 2024)
- Implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples" (☆11, updated Jun 28, 2024)
- [NeurIPS 2024 Spotlight] Official code of the paper "Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Foreca… (☆16, updated Dec 24, 2024)
- (no description) (☆12, updated Jan 25, 2025)
- Disguising Attacks with Explanation-Aware Backdoors (IEEE S&P 2023) (☆11, updated Jan 3, 2026)
- (no description) (☆10, updated Oct 31, 2022)
- (no description) (☆14, updated Feb 26, 2025)