usnistgov / trojai-example
Example TrojAI Submission
☆27 · Updated last year
Alternatives and similar repositories for trojai-example
Users interested in trojai-example are comparing it to the repositories listed below.
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆84 · Updated 2 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated last year
- A toolbox for backdoor attacks ☆23 · Updated 3 years ago
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 3 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆37 · Updated last year
- Source code for MEA-Defender, accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆29 · Updated 2 years ago
- Code implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆20 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 3 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆133 · Updated last year
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated last year
- Source code for the paper "Energy-Latency Attacks via Sponge Poisoning" ☆15 · Updated 3 years ago
- [ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening ☆10 · Updated last month
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆61 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Updated 4 years ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated last year