theothings / facescrub-dataset
Dataset of 475,000 face images of 530 people (50×50 color) from FaceScrub
☆18 · Updated 5 years ago
Alternatives and similar repositories for facescrub-dataset
Users interested in facescrub-dataset are comparing it to the repositories listed below.
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021. ☆33 · Updated 3 years ago
- ☆44 · Updated last year
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆35 · Updated 10 months ago
- ☆31 · Updated 4 years ago
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" ☆24 · Updated last month
- Implementation of the "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" paper ☆20 · Updated 5 years ago
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) ☆47 · Updated 5 years ago
- ☆26 · Updated 2 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆31 · Updated 4 years ago
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆44 · Updated 3 years ago
- ☆47 · Updated 4 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆81 · Updated last year
- [NeurIPS 2019] This is the code repo of our novel passport-based DNN ownership verification schemes, i.e. we embed passport layer into va… ☆81 · Updated last year
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) ☆48 · Updated 3 years ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 ☆22 · Updated 3 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆65 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- ☆18 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆177 · Updated last week
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network ☆29 · Updated 7 months ago
- CVPR 2021. Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- ☆41 · Updated 3 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆13 · Updated last year
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆124 · Updated 6 months ago
- [CVPR 2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks ☆40 · Updated last year
- The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking" ☆115 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- A simple implementation of BadNets on MNIST ☆33 · Updated 5 years ago