SecurityNet-Research / SecurityNetLinks
☆13 · Updated last year
Alternatives and similar repositories for SecurityNet
Users interested in SecurityNet are comparing it to the repositories listed below.
- Code for ML Doctor ☆91 · Updated last year
- Code release for DeepJudge (S&P'22) ☆51 · Updated 2 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models ☆106 · Updated 2 years ago
- ☆66 · Updated 4 years ago
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆27 · Updated 2 years ago
- ☆82 · Updated 4 years ago
- ☆99 · Updated 4 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆43 · Updated 7 months ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- ☆16 · Updated last year
- Implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆122 · Updated 3 years ago
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… ☆57 · Updated last year
- Code repository for the paper Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023) ☆44 · Updated 2 years ago
- ☆18 · Updated 4 years ago
- A curated list of machine learning security & privacy papers published in top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆288 · Updated 9 months ago
- ☆35 · Updated 11 months ago
- Papers I collected and read during my undergraduate and graduate studies ☆52 · Updated last year
- Source code for Data-free Backdoor, accepted at the 32nd USENIX Security Symposium (USENIX Security 2023) ☆31 · Updated last year
- Implementations of data poisoning attacks against neural networks and related defenses ☆93 · Updated last year
- ☆24 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 6 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆41 · Updated 3 months ago
- Source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆58 · Updated 9 months ago
- CVPR 2021 official repository for the Data-Free Model Extraction paper: https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- TrojanZoo provides a universal PyTorch platform for security research (especially backdoor attacks/defenses) on image classifica… ☆299 · Updated last week
- ☆66 · Updated last year
- ☆147 · Updated 10 months ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR10 ☆181 · Updated 2 years ago