jiawangbai / TA-LBF
The implementation of our ICLR 2021 work: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
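The basic primitive behind attacks like TA-LBF is flipping individual bits of a stored (quantized) network weight. The following is an illustrative sketch, not code from this repository: a hypothetical `flip_bit` helper showing how flipping a single bit of an 8-bit two's-complement weight changes its value.

```python
# Illustrative sketch (not the TA-LBF implementation): flipping one bit
# of an 8-bit-quantized weight, the primitive behind bit-flip attacks.

def flip_bit(qweight: int, bit: int) -> int:
    """Flip bit `bit` (0 = LSB) of an 8-bit two's-complement weight."""
    assert 0 <= bit < 8
    flipped = (qweight & 0xFF) ^ (1 << bit)   # XOR toggles the chosen bit
    # Reinterpret the 8-bit pattern as a signed int8 value.
    return flipped - 256 if flipped >= 128 else flipped

w = 3
print(flip_bit(w, 7))   # flipping the sign bit: 3 -> -125
print(flip_bit(w, 0))   # flipping the LSB: 3 -> 2
```

Flipping a high-order or sign bit can change a weight drastically, which is why a small, carefully chosen set of flips can redirect a network's prediction on a target input.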
Related projects:
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC…
- SEAT
- RAB: Provable Robustness Against Backdoor Attacks
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training"
- Data-Efficient Backdoor Attacks
- PyTorch implementation of NPAttack
- Implementation of our ICLR 2021 paper: Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks"
- The implementation of our ECCV 2020 work: Targeted Attack for Deep Hashing based Retrieval
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C…
- Official repository for the CVPR 2022 paper "Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution"
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?"
- Code for identifying natural backdoors in existing image datasets
- Official code for "Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better"
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples"
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022)
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight)
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons"