zhuhong1996 / AI-Guardian
This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors," published at IEEE Security and Privacy 2023.
☆14 · Updated 2 years ago
Alternatives and similar repositories for AI-Guardian
Users interested in AI-Guardian are comparing it to the libraries listed below.
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated 2 years ago
- ☆68 · Updated 5 years ago
- ☆25 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated 5 years ago
- ☆25 · Updated 3 years ago
- This is the official implementation of our paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… ☆58 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 11 months ago
- ☆26 · Updated 3 years ago
- ☆13 · Updated 4 years ago
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated 4 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- ☆83 · Updated 4 years ago
- Repository for Knowledge Enhanced Machine Learning Pipeline (KEMLP) ☆10 · Updated 4 years ago
- Code release for DeepJudge (S&P'22) ☆52 · Updated 2 years ago
- Official Implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 3 years ago
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190… ☆19 · Updated 2 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆40 · Updated last year
- Official Implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated 3 years ago
- ☆43 · Updated 2 years ago
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆19 · Updated 2 years ago
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆22 · Updated 5 years ago
- ☆32 · Updated 3 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated 4 years ago
- ☆33 · Updated 3 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- Hidden backdoor attack on NLP systems ☆47 · Updated 4 years ago
- This is the source code for MEA-Defender. Our paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Updated 2 years ago