AI-secure / Meta-Nerual-Trojan-Detection
☆68 · Updated Sep 29, 2020
Alternatives and similar repositories for Meta-Nerual-Trojan-Detection
Users interested in Meta-Nerual-Trojan-Detection are comparing it to the libraries listed below.
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated Apr 5, 2022
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- Anti-Backdoor Learning (NeurIPS 2021) ☆83 · Updated Jul 20, 2023
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated Jun 1, 2022
- Official repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆44 · Updated Oct 24, 2023
- TrojanZoo provides a universal PyTorch platform to conduct security research (especially backdoor attacks/defenses) of image classifica… ☆302 · Updated Aug 25, 2025
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated Nov 22, 2020
- TrojanLM: Trojaning Language Models for Fun and Profit ☆16 · Updated Jun 17, 2021
- Implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆314 · Updated Feb 28, 2020
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated Nov 11, 2020
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190… ☆19 · Updated Apr 13, 2023
- This work corroborates a run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; it is a multi-domain Trojan … ☆10 · Updated Mar 7, 2021
- Data-Efficient Backdoor Attacks ☆20 · Updated Jun 15, 2022
- ☆19 · Updated Jun 21, 2021
- ☆20 · Updated May 6, 2022
- Code for the ICLR 2022 paper "Trigger Hunting with a Topological Prior for Trojan Detection" ☆11 · Updated Sep 19, 2023
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆37 · Updated Jul 22, 2024
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" ☆21 · Updated Jun 3, 2020
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" (a minimal sketch of the STRIP idea follows this list) ☆61 · Updated Nov 12, 2024
- This is the implementation of our paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆23 · Updated Oct 13, 2021
- Backdoors framework for deep learning and federated learning. A lightweight tool to conduct your research on backdoors. ☆378 · Updated Feb 5, 2023
- An implementation of the ICCV 2021 paper "Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection" ☆28 · Updated Aug 27, 2021
- A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆90 · Updated Mar 24, 2023
- ☆151 · Updated Oct 9, 2024
- ☆26 · Updated Dec 1, 2022
- ☆23 · Updated Aug 24, 2020
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated Oct 5, 2022
- [ICLR 2021: Spotlight] Source code for the paper "A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Infer… ☆15 · Updated Feb 16, 2022
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆48 · Updated Apr 27, 2022
- ☆25 · Updated Nov 12, 2022
- The open-sourced Python toolbox for backdoor attacks and defenses. ☆641 · Updated Sep 27, 2025
- ☆581 · Updated Jul 4, 2025
- This repository is the implementation of Deep Dirichlet Process Mixture Models (UAI 2022) ☆15 · Updated May 19, 2022
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆63 · Updated Dec 20, 2024
- Hidden backdoor attack on NLP systems ☆47 · Updated Nov 14, 2021
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆27 · Updated May 1, 2024
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆63 · Updated May 8, 2023
- A list of backdoor learning resources ☆1,158 · Updated Jul 31, 2024
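
The STRIP entries above describe run-time Trojan detection via STRong Intentional Perturbation of inputs: blend the incoming input with held-out clean samples and check whether the model's predictions stay suspiciously stable. Below is a minimal NumPy sketch of that idea, assuming a `model` callable that returns softmax probabilities for a batch; the function and parameter names are illustrative and not taken from the STRIP repository.

```python
import numpy as np

def strip_entropy(model, x, clean_samples, alpha=0.5):
    """Average prediction entropy of `x` blended with held-out clean samples.

    model:         callable mapping a batch of inputs to softmax probabilities
                   (assumed interface, not a specific framework's API)
    x:             incoming input to inspect, shape (H, W, C)
    clean_samples: N held-out clean inputs, shape (N, H, W, C)
    alpha:         blending weight for the superimposition
    """
    # Strongly perturb the input by superimposing it onto each clean sample.
    blended = alpha * x[None, ...] + (1.0 - alpha) * clean_samples
    probs = np.clip(model(blended), 1e-12, 1.0)          # (N, num_classes)
    entropies = -np.sum(probs * np.log(probs), axis=1)   # entropy per copy
    return float(entropies.mean())

def looks_trojaned(model, x, clean_samples, threshold):
    # A trigger-carrying input keeps predicting the attacker's target class
    # despite strong perturbation, so its average entropy stays unusually low.
    return strip_entropy(model, x, clean_samples) < threshold
```

The detection threshold is typically calibrated on the entropy distribution of known-clean inputs, e.g. a low percentile chosen for a target false-rejection rate.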