SoftWiser-group / FTrojan
Implementation of "An Invisible Black-box Backdoor Attack through Frequency Domain"
☆19 · Updated 3 years ago
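The sketch below illustrates the general idea behind such a frequency-domain trigger (a fixed perturbation added to selected DCT coefficients, then transformed back to pixel space). It is a minimal illustration only: the band positions, magnitude, and helper names are assumptions for demonstration and are not taken from this repository's code.

```python
# Illustrative sketch of a DCT-based frequency-domain trigger (not the
# repository's actual implementation; bands and magnitude are hypothetical).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # 2-D type-II DCT with orthonormal scaling
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(x):
    # inverse 2-D DCT
    return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def add_frequency_trigger(image, bands=((15, 15), (31, 31)), magnitude=30.0):
    """Add a fixed perturbation at chosen DCT coefficients of each channel.

    image: H x W x C array with pixel values in [0, 255].
    bands: (row, col) DCT coefficient positions to perturb (hypothetical).
    """
    poisoned = image.astype(np.float64).copy()
    for c in range(poisoned.shape[2]):
        coeffs = dct2(poisoned[:, :, c])
        for (u, v) in bands:
            coeffs[u, v] += magnitude  # inject trigger energy at this frequency
        poisoned[:, :, c] = idct2(coeffs)
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# usage: poisoned = add_frequency_trigger(clean_image)
```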
Alternatives and similar repositories for FTrojan
Users interested in FTrojan are comparing it to the repositories listed below.
- ☆15 · Updated 2 years ago
- Spectrum simulation attack (ECCV'2022 Oral) towards boosting the transferability of adversarial examples · ☆111 · Updated 3 years ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness · ☆34 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks · ☆17 · Updated 6 years ago
- ☆18 · Updated 3 years ago
- The official code of IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili…" · ☆20 · Updated last year
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust…" · ☆23 · Updated 2 years ago
- Source of the ECCV22 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" · ☆18 · Updated 7 months ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) · ☆20 · Updated last year
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image · ☆36 · Updated last month
- ☆19 · Updated 3 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning · ☆18 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning · ☆32 · Updated 3 years ago
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Updated 2 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems · ☆28 · Updated 4 years ago
- A curated list of papers on the transferability of adversarial examples · ☆73 · Updated last year
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks · ☆30 · Updated 4 years ago
- Code for Prior-Guided Adversarial Initialization for Fast Adversarial Training (ECCV2022) · ☆27 · Updated 2 years ago
- Defending against Model Stealing via Verifying Embedded External Features · ☆38 · Updated 3 years ago
- Official implementation of the paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…' · ☆57 · Updated last year
- Official code for "Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge" · ☆12 · Updated last year
- ☆22 · Updated 5 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" · ☆53 · Updated 2 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch · ☆49 · Updated 4 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) · ☆35 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- Removing Adversarial Noise in Class Activation Feature Space · ☆14 · Updated 2 years ago
- A Unified Approach to Interpreting and Boosting Adversarial Transferability (ICLR2021) · ☆31 · Updated 3 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers · ☆100 · Updated 3 years ago