zhuhong1996 / AI-Guardian
This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors," published at IEEE Security and Privacy 2023.
☆13 · Updated last year
Alternatives and similar repositories for AI-Guardian:
Users interested in AI-Guardian are comparing it to the repositories listed below.
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated last year
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- Official implementation of [USENIX Sec'25] "StruQ: Defending Against Prompt Injection with Structured Queries" ☆33 · Updated last week
- Implementation of our ICLR 2021 paper "Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples" ☆12 · Updated 4 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts"☆36Updated 9 months ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness," published at ICLR 2020 ☆27 · Updated 4 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks"☆12Updated 2 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆31 · Updated 5 months ago
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection" ☆55 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated 11 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆45 · Updated 6 months ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 2 years ago
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback"☆52Updated last year
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai, Zhenyu Zhang, et al. ☆20 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models☆18Updated 2 months ago