Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆33 · Updated Nov 4, 2020
Alternatives and similar repositories for prediction-poisoning
Users that are interested in prediction-poisoning are comparing it to the libraries listed below.
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆31 · Updated Dec 12, 2021
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020). ☆22 · Updated Nov 14, 2020
- Reference implementation of the PRADA model stealing defense (IEEE Euro S&P 2019). ☆35 · Updated Mar 20, 2019
- Official repository for the Data-Free Model Extraction paper (CVPR 2021). https://arxiv.org/abs/2011.14779 ☆76 · Updated Apr 1, 2024
- ☆34 · Updated Mar 28, 2022
- Copycat CNN. ☆28 · Updated Apr 17, 2024
- Code for the paper "Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing" (CVPR 2024). ☆19 · Updated Mar 12, 2024
- A novel data-free model stealing method based on GANs. ☆133 · Updated Oct 11, 2022
- ☆11 · Updated Dec 23, 2024
- Neural Networks exam project: implementation of the FGSM and JSMA attacks of Goodfellow and Papernot. ☆16 · Updated Jan 13, 2026
- Black-Box Ripper: Copying black-box models using generative evolutionary algorithms (NeurIPS 2020), official implementation. ☆29 · Updated Oct 25, 2020
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021). ☆34 · Updated Jul 15, 2021
- ☆12 · Updated Sep 14, 2023
- Official implementation of NNSplitter (ICML '23). ☆12 · Updated Jun 11, 2024
- PyTorch implementation of backdoor unlearning. ☆21 · Updated Jun 8, 2022
- Model extraction attacks on Machine-Learning-as-a-Service platforms. ☆356 · Updated Nov 22, 2020
- Official implementation of the USENIX Security 2024 paper "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks". ☆23 · Updated Dec 6, 2023
- This project explores training data extraction attacks on the LLaMA 7B, GPT-2 XL, and GPT-2-IMDB models to discover memorized content usin… ☆15 · Updated Jun 15, 2023
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning. ☆31 · Updated Dec 2, 2023
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training". ☆20 · Updated Aug 9, 2023
- Universal Adversarial Perturbations (UAPs) for PyTorch. ☆49 · Updated Aug 28, 2021
- ☆13 · Updated Jul 11, 2019
- Project page for the paper "Aha! Adaptive History-driven Attack for Decision-based Black-box Models" (ICCV 2021). ☆10 · Updated Feb 23, 2022
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆83 · Updated May 5, 2022
- ☆16 · Updated Nov 30, 2022
- ☆21 · Updated May 19, 2025
- Code for Active Mixup (CVPR 2020). ☆23 · Updated Jan 11, 2022
- ☆23 · Updated Oct 5, 2023
- Implementation of the paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted at the NeurIPS Workshop on … ☆23 · Updated Oct 13, 2021
- [NeurIPS '20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles. ☆55 · Updated Feb 25, 2022
- Official code repository for the paper "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati… ☆14 · Updated Sep 21, 2025
- Model Extraction (Stealing) Attacks and Defenses on Machine Learning Models: a literature collection. ☆30 · Updated Sep 25, 2024
- ☆47 · Updated Mar 29, 2022
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures". ☆11 · Updated Nov 21, 2022
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… ☆27 · Updated Apr 29, 2020
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples. ☆46 · Updated Nov 25, 2019
- ☆54 · Updated Mar 16, 2021
- General-purpose library for extracting interpretable models from Multi-Agent Reinforcement Learning systems. ☆21 · Updated May 10, 2020
- [ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models. ☆18 · Updated Sep 25, 2025