dl970220 / Reversible-Image-Watermarking-Using-Interpolation-Technique
Reproduction of a reversible watermarking paper
☆10Updated 6 years ago
Alternatives and similar repositories for Reversible-Image-Watermarking-Using-Interpolation-Technique
Users interested in Reversible-Image-Watermarking-Using-Interpolation-Technique are comparing it to the libraries listed below
- Webank AI☆42Updated 9 months ago
- edu-yinzhaoxia / Reversible-Data-Hiding-in-Encrypted-Images-Based-on-multi-MSB-Prediction-and-Huffman-Coding: This code is the implementation of the paper "Reversible Data Hiding in Encrypted Images Based on Multi-MSB Prediction and Huffman Coding…☆29Updated 5 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for data attack and we use CIFAR-10 data set…☆14Updated 5 years ago
- The reproduction of the paper Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning.☆62Updated 2 years ago
- [ACM Computing Survey 2025] Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey, by MARS Group at Wuhan Univ…☆22Updated 8 months ago
- A foundational platform that primarily shares federated learning, differential privacy content☆26Updated 8 months ago
- ☆25Updated 3 weeks ago
- Paper collection of federated learning. Conferences and Journals Collection for Federated Learning from 2019 to 2021, Accepted Papers, Ho…☆94Updated 3 years ago
- paper code☆28Updated 5 years ago
- ☆51Updated 4 years ago
- ☆38Updated 4 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341☆77Updated 2 years ago
- Backdoor attack on a LeNet-5 network via data poisoning, using the MNIST handwritten digit dataset☆14Updated 4 years ago
- ☆25Updated 4 years ago
- ☆19Updated 5 years ago
- The core code for our paper "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning".☆21Updated last year
- A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started.☆186Updated 2 months ago
- DiffWA: Diffusion Models for Watermark Attack☆11Updated last year
- This repository contains the implementation of three adversarial example attack methods FGSM, IFGSM, MI-FGSM and one Distillation as defe…☆136Updated 4 years ago
- ☆55Updated 2 years ago
- The official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks".☆19Updated last year
- ☆94Updated 4 years ago
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an…☆19Updated 2 years ago
- ☆16Updated 2 years ago
- Code & supplementary material of the paper Label Inference Attacks Against Federated Learning on Usenix Security 2022.☆88Updated 2 years ago
- FGSM implemented in PyTorch (see the sketch after this list)☆31Updated 4 years ago
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021.☆33Updated 4 years ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning"☆65Updated 5 years ago
- ☆46Updated 2 years ago
- ☆35Updated 4 years ago
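
For reference, a minimal FGSM sketch in PyTorch, related to the FGSM repositories listed above. This is not code from any of the listed repositories; the model, loss, and epsilon value are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step FGSM: perturb inputs along the sign of the loss gradient.

    Assumes `model` is a classifier taking image tensors in [0, 1] and
    `labels` are integer class indices; epsilon is a placeholder value.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x loss), clamped back to the valid pixel range
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```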