jianshuod / TBA
Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training"
☆20 · Updated 2 years ago
Alternatives and similar repositories for TBA
Users that are interested in TBA are comparing it to the libraries listed below
- The implementation of our ICLR 2021 work: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits ☆18 · Updated 4 years ago
- ☆33 · Updated 3 years ago
- Official Implementation of NIPS 2022 paper Pre-activation Distributions Expose Backdoor Neurons ☆15 · Updated 3 years ago
- ☆20 · Updated 2 years ago
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability ☆16 · Updated last year
- With respect to the input tensor instead of the parameters of the NN ☆21 · Updated 3 years ago
- Official Code Implementation for the CCS 2022 Paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Updated 3 years ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- Data-Efficient Backdoor Attacks ☆20 · Updated 3 years ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated last year
- ☆14 · Updated 11 months ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Updated 3 years ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆58 · Updated last year
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆40 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆18 · Updated 4 months ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 3 years ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security. ☆75 · Updated 3 years ago
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated 3 years ago
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022… ☆40 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year
- SEAT ☆21 · Updated 2 years ago
- ☆15 · Updated last year
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · Updated last year
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago