SecurityNet-Research / SecurityNetLinks
☆14 · Updated last year
Alternatives and similar repositories for SecurityNet
Users interested in SecurityNet are comparing it to the libraries listed below.
- Code release for DeepJudge (S&P'22) ☆52 · Updated 2 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models ☆114 · Updated 3 years ago
- Code for ML Doctor ☆92 · Updated last year
- ☆27 · Updated 4 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 7 months ago
- ☆37 · Updated last year
- ☆83 · Updated 4 years ago
- ☆68 · Updated 5 years ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆52 · Updated 8 months ago
- ☆17 · Updated last year
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- ☆101 · Updated 5 years ago
- A curated list of trustworthy Generative AI papers. Daily updating... ☆76 · Updated last year
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆113 · Updated last year
- ☆26 · Updated 3 years ago
- Implementations and demo of a regular Backdoor and a Latent backdoor attack on Deep Neural Networks. ☆19 · Updated 3 years ago
- ☆19 · Updated last year
- Code for paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆26 · Updated last year
- Simple PyTorch implementations of Badnets on MNIST and CIFAR10. ☆193 · Updated 3 years ago
- This is the source code for Data-free Backdoor. Our paper is accepted by the 32nd USENIX Security Symposium (USENIX Security 2023). ☆34 · Updated 2 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- Papers I have collected and read in undergraduate and graduate period ☆54 · Updated 2 years ago
- A curated list of academic events on AI Security & Privacy ☆167 · Updated last year
- This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆128 · Updated 4 years ago
- Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019. ☆35 · Updated 6 years ago
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries ☆61 · Updated 2 months ago
- CVPR 2021 official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆75 · Updated last year
- ☆78 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 11 months ago