tribhuvanesh / prediction-poisoning
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆33 · Updated Nov 4, 2020 (5 years ago)
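The paper behind this repository proposes the MAD defense: instead of returning honest posteriors, the model owner replies with a perturbed posterior that maximizes the angular deviation of the gradient a model-stealing attacker would distill from, subject to a utility budget. Below is a minimal PyTorch sketch of that idea; the corner-search heuristic, the stand-in surrogate Jacobian `G`, and all names are illustrative assumptions, not the repository's actual implementation.

```python
import torch

def poison_posterior(y, G, eps=0.5):
    """Sketch: perturb posterior y so the attacker's distillation gradient is misaligned.

    y:   (K,) true posterior over K classes
    G:   (K, D) stand-in for an attacker surrogate's Jacobian (assumed, not from the repo)
    eps: L1 budget on how far the returned posterior may move from y
    """
    u = G.t() @ y                                   # (D,) gradient signal an honest reply induces
    u = u / (u.norm() + 1e-12)
    # Score each simplex corner (one-hot posterior) by how far it rotates that signal.
    rows = G / (G.norm(dim=1, keepdim=True) + 1e-12)
    deviation = 1.0 - rows @ u                      # (K,) one minus cosine similarity
    corner = torch.zeros_like(y)
    corner[deviation.argmax()] = 1.0
    # Step toward the most misaligning corner while staying inside the L1 budget,
    # so the reply remains a valid probability distribution close to y.
    alpha = min(1.0, eps / float((corner - y).abs().sum() + 1e-12))
    return (1 - alpha) * y + alpha * corner

# Usage (hypothetical shapes): y = torch.softmax(torch.randn(10), dim=0); G = torch.randn(10, 64)
# y_tilde = poison_posterior(y, G, eps=0.5)  # perturbed reply sent to the querying client
```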
Alternatives and similar repositories for prediction-poisoning
Users interested in prediction-poisoning are comparing it to the repositories listed below.
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)☆22Nov 14, 2020Updated 5 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation".☆31Dec 12, 2021Updated 4 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models☆114Dec 8, 2022Updated 3 years ago
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779☆75Apr 1, 2024Updated last year
- Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019.☆35Mar 20, 2019Updated 6 years ago
- Copycat CNN☆28Apr 17, 2024Updated last year
- ☆34Mar 28, 2022Updated 3 years ago
- The code of paper: Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing (CVPR 2024))☆19Mar 12, 2024Updated last year
- A novel data-free model stealing method based on GAN☆133Oct 11, 2022Updated 3 years ago
- Neural Networks exam project. Machine learning algorithm: implementation of FGSM and JSMA attacks by Goodfellow and Papernot.☆16Jan 13, 2026Updated last month
- Black-Box Ripper: Copying black-box models using generative evolutionary algorithms (NeurIPS 2020, official implementation) · ☆29 · Updated Oct 25, 2020 (5 years ago)
- ☆11 · Updated Dec 23, 2024 (last year)
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) · ☆34 · Updated Jul 15, 2021 (4 years ago)
- PyTorch implementation of backdoor unlearning · ☆21 · Updated Jun 8, 2022 (3 years ago)
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" · ☆20 · Updated Aug 9, 2023 (2 years ago)
- Defending against Model Stealing via Verifying Embedded External Features · ☆38 · Updated Feb 19, 2022 (3 years ago)
- Project page of the paper "Aha! Adaptive History-driven Attack for Decision-based Black-box Models" (ICCV 2021) · ☆10 · Updated Feb 23, 2022 (3 years ago)
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… · ☆27 · Updated Apr 29, 2020 (5 years ago)
- Universal Adversarial Perturbations (UAPs) for PyTorch · ☆49 · Updated Aug 28, 2021 (4 years ago)
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ☆33 · Updated Dec 2, 2023 (2 years ago)
- This project explores training data extraction attacks on the LLaMa 7B, GPT-2XL, and GPT-2-IMDB models to discover memorized content usin… · ☆15 · Updated Jun 15, 2023 (2 years ago)
- Official code repository for the paper "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati… · ☆14 · Updated Sep 21, 2025 (4 months ago)
- Official implementation of NNSplitter (ICML '23) · ☆12 · Updated Jun 11, 2024 (last year)
- Official implementation of the USENIX Security 2024 paper "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks" · ☆21 · Updated Dec 6, 2023 (2 years ago)
- ☆17 · Updated Nov 30, 2022 (3 years ago)
- Model extraction attacks on Machine-Learning-as-a-Service platforms · ☆355 · Updated Nov 22, 2020 (5 years ago)
- ☆19 · Updated Mar 26, 2022 (3 years ago)
- Code for FAB-attack · ☆34 · Updated Jul 10, 2020 (5 years ago)
- ☆23 · Updated Sep 15, 2024 (last year)
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" · ☆18 · Updated May 31, 2023 (2 years ago)
- Implementation of our ICLR 2021 work "Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits" · ☆18 · Updated Jul 20, 2021 (4 years ago)
- Literature list: Model Extraction (Stealing) Attacks and Defenses on Machine Learning Models · ☆29 · Updated Sep 25, 2024 (last year)
- PyTorch code for ens_adv_train · ☆17 · Updated Jun 7, 2019 (6 years ago)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · Updated Nov 25, 2019 (6 years ago)
- Source code for "Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses" (ECCV 2020) · ☆42 · Updated Apr 2, 2019 (6 years ago)
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… · ☆50 · Updated Dec 20, 2023 (2 years ago)
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆48 · Updated Dec 17, 2019 (6 years ago)
- ☆42 · Updated Dec 8, 2022 (3 years ago)
- Generalized Data-free Universal Adversarial Perturbations in PyTorch · ☆20 · Updated Oct 9, 2020 (5 years ago)
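A common thread in the attack repositories above (Knockoff Nets, CloudLeak, MAZE, Data-Free Model Extraction, Black-Box Ripper) is the same extraction loop: query the black-box victim on transfer data, then distill its posteriors into a surrogate. The sketch below shows that loop in PyTorch under stated assumptions: `victim`, `surrogate`, and `query_loader` are hypothetical placeholders, and the plain KL distillation here is generic, not any single repository's method.

```python
import torch
import torch.nn.functional as F

def extract(victim, surrogate, query_loader, epochs=10, lr=1e-3):
    """Generic knockoff-style extraction loop (illustrative placeholders throughout)."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    victim.eval()
    for _ in range(epochs):
        for x, _ in query_loader:                        # transfer-set labels are ignored
            with torch.no_grad():
                teacher = F.softmax(victim(x), dim=1)    # only black-box posterior access
            student = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(student, teacher, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```

Prediction poisoning targets exactly the `teacher` line: if the returned posteriors are perturbed as in the earlier sketch, the KL gradients steer the surrogate away from the victim's decision boundary.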