etfovac / watermark
Robustness of DWT- vs. DCT-based watermarking is graded by the quality of the extracted watermark, measured with the correlation coefficient (0-100%).
☆13 · Updated last year
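The grading metric is simple to reproduce. Below is a minimal sketch, assuming NumPy and PyWavelets (pywt): it additively embeds a watermark into the LL sub-band of a one-level Haar DWT, extracts it by differencing against the cover image, and reports the correlation coefficient as a percentage. The function names, the Haar wavelet, and the gain factor `alpha` are illustrative assumptions, not taken from this repository.

```python
# Hedged sketch of DWT watermarking graded by a correlation coefficient.
# Requires: numpy, pywavelets. All parameters below are illustrative.
import numpy as np
import pywt

def embed_dwt(cover, watermark, alpha=0.05):
    """Embed a watermark additively into the LL sub-band of a 1-level Haar DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    wm = np.resize(watermark.astype(float), LL.shape)   # fit watermark to the sub-band
    return pywt.idwt2((LL + alpha * wm, (LH, HL, HH)), "haar")

def extract_dwt(marked, cover, alpha=0.05):
    """Non-blind extraction: difference the LL sub-bands of marked and cover images."""
    LL_m, _ = pywt.dwt2(marked.astype(float), "haar")
    LL_c, _ = pywt.dwt2(cover.astype(float), "haar")
    return (LL_m - LL_c) / alpha

def correlation_pct(original, extracted):
    """Correlation coefficient between original and extracted watermark, as a percentage."""
    a = original.astype(float).ravel()
    b = np.resize(extracted.astype(float), original.shape).ravel()
    return 100.0 * np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (256, 256))
    wm = rng.integers(0, 2, (128, 128))
    marked = embed_dwt(cover, wm)
    print(f"correlation: {correlation_pct(wm, extract_dwt(marked, cover)):.1f}%")
```

With no distortion applied, extraction is exact up to floating-point error and the correlation is essentially 100%; robustness is presumably graded by how far this drops after the marked image is attacked or degraded.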
Alternatives and similar repositories for watermark
Users interested in watermark are comparing it to the repositories listed below:
- This project uses a semi-blind watermarking approach to prove ownership of the image. ☆11 · Updated 4 years ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21) ☆17 · Updated 2 years ago
- The code repository for our CVPR 2021 paper on protecting the IPR of Generative Adversarial Networks (GANs) from ambiguity attacks ☆32 · Updated last year
- A general approach for using deep neural networks for digital watermarking ☆15 · Updated 5 years ago
- Generative Models to hide Audio inside Images using custom loss functions and Spectrogram Analysis ☆20 · Updated 3 years ago
- Code repository for Blackbox Attacks via Surrogate Ensemble Search (BASES), NeurIPS 2022 ☆11 · Updated 11 months ago
- Developing adversarial examples and showing their semantic generalization for the OpenAI CLIP model (https://github.com/openai/CLIP) ☆26 · Updated 4 years ago
- IStego100K: Large-scale Image Steganalysis Dataset ☆66 · Updated 4 years ago
- Watermarking Deep Neural Networks (USENIX 2018) ☆98 · Updated 4 years ago
- Official code for DefakeHop: A Light-Weight High-Performance Deepfake Detector ☆77 · Updated 2 years ago
- [Preprint] "Can 3D Adversarial Logos Cloak Humans?" ☆18 · Updated 3 years ago
- Official implementation of "Watermarking Images in Self-Supervised Latent-Spaces" ☆112 · Updated 2 years ago
- Website & Documentation: https://sbaresearch.github.io/model-watermarking/ ☆24 · Updated last year
- [ICML 2024 - Foundation Models in the Wild] DistilDIRE: A Small, Fast, Cheap and Lightweight Diffusion Synthesized Deepfake Detection ☆27 · Updated 11 months ago
- ☆25 · Updated 2 years ago
- Code for the paper titled "Adversarial Vulnerability of Randomized Ensembles" (ICML 2022). ☆10 · Updated 3 years ago
- This dataset contains results from all rounds of Adversarial Nibbler. This data includes adversarial prompts fed into public generative t… ☆23 · Updated 5 months ago
- This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors, at IEEE Security a… ☆13 · Updated last year
- Adversarial Augmentation Against Adversarial Attacks ☆31 · Updated 2 years ago
- Implementation of the "A Watermark for Large Language Models" paper by Kirchenbauer, Geiping, et al. ☆24 · Updated 2 years ago
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks ☆43 · Updated 3 years ago
- ☆9 · Updated 3 years ago
- ☆16 · Updated 3 years ago
- The official implementation of the "ClusTR: Clustering Training for Robustness" paper ☆20 · Updated 3 years ago
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated 2 weeks ago
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021. ☆33 · Updated 4 years ago
- ☆29 · Updated 2 years ago
- ☆48 · Updated 4 years ago
- ☆20 · Updated 2 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆39 · Updated 3 years ago