zhuhong1996 / AI-Guardian
This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors", published at IEEE Security and Privacy 2023.
☆13 · Updated last year
Alternatives and similar repositories for AI-Guardian
Users interested in AI-Guardian are comparing it to the repositories listed below.
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated 5 years ago
- ☆66 · Updated 4 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆20 · Updated 4 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆13 · Updated 4 years ago
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… ☆57 · Updated last year
- ☆82 · Updated 4 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 5 months ago
- ☆65 · Updated last year
- ☆44 · Updated 2 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Updated 9 months ago
- ☆24 · Updated 2 years ago
- Craft poisoned data using MetaPoison ☆52 · Updated 4 years ago
- ☆13 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- ☆22 · Updated 8 months ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- ☆20 · Updated 2 years ago
- ☆44 · Updated 6 months ago
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆19 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- ☆21 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- Official implementation of the CVPR 2022 paper "Backdoor Attacks on Self-Supervised Learning" ☆74 · Updated last year
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190…) ☆18 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 2 years ago
- ☆25 · Updated 6 years ago
- ☆6 · Updated 2 years ago