EQuiw / 2019-scalingattack
Image-Scaling Attacks and Defenses
☆180 · Updated 2 years ago
Alternatives and similar repositories for 2019-scalingattack:
Users interested in 2019-scalingattack are comparing it to the repositories listed below.
- Testing the effectiveness of practical implementations of adversarial examples against facial recognition. ☆137 · Updated last month
- Steps towards physical adversarial attacks on facial recognition ☆80 · Updated last year
- Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations" ☆18 · Updated 3 weeks ago
- 🔥🔥 Defending Against Deepfakes Using Adversarial Attacks on Conditional Image Translation Networks ☆328 · Updated 4 years ago
- 🏞 Steganography-based image integrity - Merkle tree nodes embedded into image chunks so that each chunk's integrity can be verified on i… ☆105 · Updated 3 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated last year
- ☆122 · Updated 3 years ago
- ☆85 · Updated 4 years ago
- Code for attacking state-of-the-art face-recognition system from our paper: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "Accessorize … ☆59 · Updated 6 years ago
- ☆21 · Updated 3 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- ☆84 · Updated last year
- Implementation of AGNs, proposed in: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "A General Framework for Adversarial Examples with O… ☆37 · Updated 4 years ago
- This repository contains the official PyTorch implementation of the GeoDA algorithm. GeoDA is a black-box attack to generate adversarial exam… ☆33 · Updated 4 years ago
- Break neural networks in your browser 🦹‍♂️ ☆149 · Updated 2 years ago
- Building the next-gen watermark with deep learning. ☆185 · Updated 3 years ago
- ☆40 · Updated last year
- Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets" ☆71 · Updated 4 years ago
- Datasets for the paper "Adversarial Examples are not Bugs, They Are Features" ☆186 · Updated 4 years ago
- White-box adversarial attack ☆38 · Updated 4 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆66 · Updated 7 years ago
- A unified benchmark problem for data poisoning attacks ☆153 · Updated last year
- PyTorch implementation of Adversarial Patch ☆13 · Updated last year
- ☆33 · Updated 4 years ago
- Black-Box Adversarial Attack on Public Face Recognition Systems ☆409 · Updated 3 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆101 · Updated 7 months ago
- ☆246 · Updated 6 years ago
- A novel data-free model stealing method based on GANs ☆127 · Updated 2 years ago
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆162 · Updated 8 months ago
- Preimage attack against NeuralHash 💣 ☆669 · Updated 2 years ago