jianshuod / TBA
Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training"
☆20Updated 2 years ago
Alternatives and similar repositories for TBA
Users that are interested in TBA are comparing it to the libraries listed below
- The implementation of our ICLR 2021 work: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits☆18Updated 4 years ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons"☆15Updated 2 years ago
- ☆19Updated 2 years ago
- With respect to the input tensor instead of parameters of the NN☆21Updated 3 years ago
- ☆33Updated 3 years ago
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability☆16Updated last year
- ☆25Updated 3 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh…☆20Updated 3 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts"☆40Updated last year
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec…☆23Updated 2 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers☆16Updated last year
- Official Code Implementation for the CCS 2022 Paper "On the Privacy Risks of Cell-Based NAS Architectures"☆11Updated 3 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" …☆12Updated 2 years ago
- ☆25Updated 3 years ago
- Data-Efficient Backdoor Attacks☆20Updated 3 years ago
- ☆26Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight)☆27Updated last year
- This is the official repository for our NeurIPS'22 paper "Watermarking for Out-of-distribution Detection."☆18Updated 2 years ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107☆19Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models☆60Updated last year
- ☆14Updated 3 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks☆16Updated 3 months ago
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC…☆27Updated 5 years ago
- SEAT☆21Updated 2 years ago
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C…☆27Updated 3 years ago
- ☆12Updated 3 years ago
- ☆24Updated last year
- [ICLR2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022…☆40Updated 2 years ago
- ☆14Updated 10 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"☆58Updated 11 months ago