Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆33 · Updated Nov 4, 2020
Alternatives and similar repositories for prediction-poisoning
Users interested in prediction-poisoning are comparing it to the repositories listed below.
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" · ☆31 · Updated Dec 12, 2021
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) · ☆22 · Updated Nov 14, 2020
- Knockoff Nets: Stealing Functionality of Black-Box Models · ☆115 · Updated Dec 8, 2022
- Reference implementation of the PRADA model stealing defense (IEEE Euro S&P 2019) · ☆34 · Updated Mar 20, 2019
- Official repository for the Data-Free Model Extraction paper (CVPR 2021), https://arxiv.org/abs/2011.14779 · ☆76 · Updated Apr 1, 2024
- ☆34 · Updated Mar 28, 2022
- Copycat CNN · ☆28 · Updated Apr 17, 2024
- Code for the paper "Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing" (CVPR 2024) · ☆19 · Updated Mar 12, 2024
- ☆11 · Updated Dec 23, 2024
- Neural Networks exam project: implementations of the FGSM (Goodfellow et al.) and JSMA (Papernot et al.) adversarial attacks · ☆16 · Updated Jan 13, 2026
- Black-Box Ripper: Copying Black-Box Models Using Generative Evolutionary Algorithms (NeurIPS 2020, official implementation) · ☆29 · Updated Oct 25, 2020
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) · ☆34 · Updated Jul 15, 2021
- ☆12 · Updated Sep 14, 2023
- Defending against Model Stealing via Verifying Embedded External Features · ☆38 · Updated Feb 19, 2022
- Official implementation of NNSplitter (ICML 2023) · ☆12 · Updated Jun 11, 2024
- PyTorch implementation of backdoor unlearning · ☆21 · Updated Jun 8, 2022
- Model extraction attacks on Machine-Learning-as-a-Service platforms · ☆356 · Updated Nov 22, 2020
- Official implementation of "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks" (USENIX Security 2024) · ☆23 · Updated Dec 6, 2023
- Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning (ICLR 2023, Spotlight) · ☆31 · Updated Dec 2, 2023
- Official code for "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" (ICCV 2023) · ☆20 · Updated Aug 9, 2023
- Universal Adversarial Perturbations (UAPs) for PyTorch · ☆49 · Updated Aug 28, 2021
- ☆13 · Updated Jul 11, 2019
- Project page for "Aha! Adaptive History-driven Attack for Decision-based Black-box Models" (ICCV 2021) · ☆10 · Updated Feb 23, 2022
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" · ☆82 · Updated May 5, 2022
- ☆16 · Updated Nov 30, 2022
- ☆21 · Updated May 19, 2025
- ☆22 · Updated Oct 5, 2023
- Implementation of "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … · ☆23 · Updated Oct 13, 2021
- DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles (NeurIPS 2020 Oral) · ☆55 · Updated Feb 25, 2022
- Official code repository for "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati…" · ☆14 · Updated Sep 21, 2025
- ☆47 · Updated Mar 29, 2022
- Official code implementation for "On the Privacy Risks of Cell-Based NAS Architectures" (CCS 2022) · ☆11 · Updated Nov 21, 2022
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… · ☆27 · Updated Apr 29, 2020
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · Updated Nov 25, 2019
- ☆55 · Updated Mar 16, 2021
- General-purpose library for extracting interpretable models from Multi-Agent Reinforcement Learning systems · ☆21 · Updated May 10, 2020
- Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models (ICML 2024) · ☆18 · Updated Sep 25, 2025
- Code/models for "Defending Against Universal Attacks Through Selective Feature Regeneration" (CVPR 2020) · ☆10 · Updated Jul 31, 2020
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆49 · Updated Dec 17, 2019