Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆33 · Updated Nov 4, 2020
Alternatives and similar repositories for prediction-poisoning
Users interested in prediction-poisoning are comparing it to the repositories listed below.
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" · ☆31 · updated Dec 12, 2021
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) · ☆22 · updated Nov 14, 2020
- Knockoff Nets: Stealing Functionality of Black-Box Models · ☆115 · updated Dec 8, 2022
- Reference implementation of the PRADA model stealing defense (IEEE Euro S&P 2019) · ☆35 · updated Mar 20, 2019
- Official repository for the Data-Free Model Extraction paper (CVPR 2021), https://arxiv.org/abs/2011.14779 · ☆77 · updated Apr 1, 2024
- ☆34 · updated Mar 28, 2022
- Copycat CNN · ☆28 · updated Apr 17, 2024
- Code for the paper "Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing" (CVPR 2024) · ☆19 · updated Mar 12, 2024
- A novel data-free model stealing method based on GANs · ☆134 · updated Oct 11, 2022
- ☆11 · updated Dec 23, 2024
- Neural Networks exam project: implementations of the FGSM and JSMA attacks by Goodfellow and Papernot · ☆16 · updated Jan 13, 2026
- Black-Box Ripper: Copying black-box models using generative evolutionary algorithms (NeurIPS 2020, official implementation) · ☆29 · updated Oct 25, 2020
- ☆12 · updated Sep 14, 2023
- Official implementation of NNSplitter (ICML '23) · ☆12 · updated Jun 11, 2024
- PyTorch implementation of backdoor unlearning · ☆21 · updated Jun 8, 2022
- Model extraction attacks on Machine-Learning-as-a-Service platforms · ☆356 · updated Nov 22, 2020
- Official implementation of the USENIX Security 2024 paper "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks" · ☆25 · updated Dec 6, 2023
- This project explores training data extraction attacks on the LLaMa 7B, GPT-2XL, and GPT-2-IMDB models to discover memorized content usin… · ☆15 · updated Jun 15, 2023
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ☆31 · updated Dec 2, 2023
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" · ☆20 · updated Aug 9, 2023
- Universal Adversarial Perturbations (UAPs) for PyTorch · ☆49 · updated Aug 28, 2021
- ☆13 · updated Jul 11, 2019
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" · ☆82 · updated May 5, 2022
- ☆16 · updated Nov 30, 2022
- Code for Active Mixup (CVPR 2020) · ☆23 · updated Jan 11, 2022
- ☆22 · updated Oct 5, 2023
- Implementation of the paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted at the NeurIPS Workshop on … · ☆23 · updated Oct 13, 2021
- [NeurIPS '20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles · ☆55 · updated Feb 25, 2022
- Official code repository for the paper "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati…" · ☆14 · updated Sep 21, 2025
- Literature list of model extraction (stealing) attacks and defenses on machine learning models · ☆30 · updated Sep 25, 2024
- ☆47 · updated Mar 29, 2022
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… · ☆27 · updated Apr 29, 2020
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · updated Nov 25, 2019
- ☆56 · updated Mar 16, 2021
- General-purpose library for extracting interpretable models from multi-agent reinforcement learning systems · ☆22 · updated May 10, 2020
- [ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models · ☆18 · updated Sep 25, 2025
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆49 · updated Dec 17, 2019
- ☆19 · updated Mar 26, 2022
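Several of the repositories above implement gradient-based adversarial attacks such as FGSM (Goodfellow et al.), which perturbs an input one step in the sign direction of the loss gradient. As a rough orientation for what these codebases do, here is a minimal NumPy sketch of FGSM against a toy logistic classifier; the function names and the specific weights are illustrative assumptions, not code from any repository listed here.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: move x by eps in the sign direction of the
    loss gradient, which (to first order) increases the loss."""
    return x + eps * np.sign(grad)

def logistic_loss_and_grad(w, x, y):
    """Logistic loss -log sigmoid(y * w.x) for a linear classifier,
    and its gradient with respect to the *input* x."""
    margin = y * np.dot(w, x)
    s = 1.0 / (1.0 + np.exp(-margin))
    loss = -np.log(s)
    grad_x = -(1.0 - s) * y * w  # d(loss)/dx
    return loss, grad_x

# Illustrative weights and input (assumed, not from any paper above).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
y = 1.0

loss_before, g = logistic_loss_and_grad(w, x, y)
x_adv = fgsm_perturb(x, g, eps=0.1)
loss_after, _ = logistic_loss_and_grad(w, x_adv, y)
assert loss_after > loss_before  # the perturbation raises the loss
```

For a linear model the loss is monotone in the margin, so the single FGSM step is guaranteed to increase it; on deep networks the same step is only a first-order heuristic, which is why the repositories above often combine it with iterative variants.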