SecurityNet-Research / SecurityNetLinks
☆14 · Updated last year
Alternatives and similar repositories for SecurityNet
Users interested in SecurityNet are comparing it to the repositories listed below.
- Code for ML Doctor ☆92 · Updated last year
- Code release for DeepJudge (S&P'22) ☆52 · Updated 2 years ago
- ☆17 · Updated last year
- ☆68 · Updated 5 years ago
- ☆27 · Updated 4 years ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆52 · Updated 8 months ago
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Updated 2 years ago
- ☆101 · Updated 5 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models ☆114 · Updated 3 years ago
- ☆83 · Updated 4 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- Papers collected and read during undergraduate and graduate study ☆54 · Updated 2 years ago
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆26 · Updated last year
- Implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆127 · Updated 4 years ago
- ☆25 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 11 months ago
- ☆25 · Updated 3 years ago
- Source code for Data-free Backdoor, accepted at the 32nd USENIX Security Symposium (USENIX Security 2023) ☆34 · Updated 2 years ago
- A curated list of academic events on AI Security & Privacy ☆167 · Updated last year
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- ☆151 · Updated last year
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Updated last year
- Implementations of data poisoning attacks against neural networks and related defenses ☆102 · Updated last year
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆19 · Updated 3 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 7 months ago
- Official repository for the CVPR 2021 Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆75 · Updated last year
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10 ☆193 · Updated 3 years ago
- ☆37 · Updated last year
- Code for AAAI 2021 "Towards Feature Space Adversarial Attack" ☆30 · Updated 4 years ago
- Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019 ☆35 · Updated 6 years ago