[AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
☆29 · Updated Dec 31, 2024
Alternatives and similar repositories for DFST
Users interested in DFST are comparing it to the libraries listed below.
- A toolbox for backdoor attacks. ☆23 · Updated Jan 13, 2023
- This is the implementation for CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning." ☆24 · Updated Apr 5, 2022
- This is the implementation for IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… ☆11 · Updated Aug 24, 2022
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated Jan 15, 2025
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆21 · Updated Oct 5, 2025
- ☆12 · Updated May 27, 2022
- ☆20 · Updated Feb 11, 2024
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆28 · Updated Apr 15, 2025
- [ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening ☆10 · Updated Dec 18, 2025
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated May 7, 2024
- ☆10 · Updated Oct 31, 2022
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆38 · Updated Jul 22, 2024
- Official repository for CVPR'23 paper: Detecting Backdoors in Pre-trained Encoders ☆36 · Updated Sep 25, 2023
- Official Implementation of NeurIPS 2024 paper - BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens ☆28 · Updated Feb 17, 2026
- Siren: Byzantine-robust Federated Learning via Proactive Alarming (SoCC '21) ☆11 · Updated Mar 28, 2024
- ☆18 · Updated Aug 15, 2022
- ☆22 · Updated Sep 16, 2022
- ☆18 · Updated Jun 15, 2021
- ☆20 · Updated May 6, 2022
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated Dec 11, 2024
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆17 · Updated Feb 26, 2026
- ☆17 · Updated Sep 4, 2024
- ☆15 · Updated Dec 29, 2023
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated Jan 27, 2024
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- ☆26 · Updated Dec 1, 2022
- ☆27 · Updated Nov 9, 2022
- Official Implementation of NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated Jan 13, 2023
- ☆20 · Updated Aug 7, 2023
- Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" ☆26 · Updated Jan 7, 2022
- ☆46 · Updated Feb 16, 2026
- ☆19 · Updated Mar 26, 2022
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆53 · Updated Jun 2, 2025
- ☆83 · Updated Aug 3, 2021
- Trojan Attack on Neural Network ☆190 · Updated Mar 25, 2022
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated May 9, 2023
- Code for NDSS 2022 paper "MIRROR: Model Inversion for Deep Learning Network with High Fidelity" ☆27 · Updated May 9, 2023
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆20 · Updated Sep 18, 2025
- [PyTorch Implementation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated Feb 27, 2021