EQuiw / 2019-scalingattack
Image-Scaling Attacks and Defenses
⭐181 · Updated 2 years ago
Alternatives and similar repositories for 2019-scalingattack:
Users interested in 2019-scalingattack are comparing it to the repositories listed below.
- building the next-gen watermark with deep learning. ⭐185 · Updated 3 years ago
- ⭐21 · Updated 3 years ago
- Steganography-based image integrity - Merkle tree nodes embedded into image chunks so that each chunk's integrity can be verified on i… ⭐105 · Updated 3 years ago
- Implementation of AGNs, proposed in: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "A General Framework for Adversarial Examples with O…" ⭐37 · Updated 4 years ago
- Codes for reproducing query-efficient black-box attacks in "AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking B…" ⭐57 · Updated 5 years ago
- ⭐40 · Updated last year
- ⭐33 · Updated 4 years ago
- Privacy Testing for Deep Learning ⭐204 · Updated last year
- A repository to quickly generate synthetic data and associated trojaned deep learning models ⭐77 · Updated last year
- Copycat CNN ⭐27 · Updated last year
- Benchmarking and Visualization Tool for Adversarial Machine Learning ⭐187 · Updated 2 years ago
- Countering Adversarial Images using Input Transformations. ⭐496 · Updated 3 years ago
- My entry for the ICLR 2018 Reproducibility Challenge for the paper "Synthesizing Robust Adversarial Examples" https://openreview.net/pdf?id=BJDH5M-… ⭐73 · Updated 7 years ago
- ⭐37 · Updated 4 years ago
- Implementation of the Biased Boundary Attack for ImageNet ⭐23 · Updated 5 years ago
- Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations" ⭐18 · Updated last month
- Preimage attack against NeuralHash ⭐669 · Updated 2 years ago
- The official implementation of the CVPR 2021 paper "Simulating Unknown Target Models for Query-Efficient Black-box Attacks" ⭐57 · Updated 3 years ago
- ⭐85 · Updated 4 years ago
- Detecting Adversarial Examples in Deep Neural Networks ⭐67 · Updated 7 years ago
- Watermarking Deep Neural Networks (USENIX 2018) ⭐97 · Updated 4 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ⭐56 · Updated 5 months ago
- Testing the effectiveness of practical implementations of adversarial examples against facial recognition. ⭐137 · Updated 2 months ago
- Code for attacking a state-of-the-art face-recognition system from our paper: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "Accessorize …" ⭐59 · Updated 6 years ago
- SurFree: a fast surrogate-free black-box attack ⭐43 · Updated 10 months ago
- PyTorch implementation of adversarial patch ⭐211 · Updated 3 years ago
- ⭐48 · Updated 4 years ago
- Trojan Attack on Neural Network ⭐183 · Updated 3 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ⭐102 · Updated 8 months ago
- ⭐17 · Updated 2 years ago