Eyr3 / TextCRS
Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)
☆34 · Updated last month
Alternatives and similar repositories for TextCRS
Users interested in TextCRS are comparing it to the repositories listed below.
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆37 · Updated 6 months ago
- ☆23 · Updated 2 years ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆32 · Updated 9 months ago
- Code release for DeepJudge (S&P '22) ☆51 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- ☆82 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- [IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆25 · Updated last month
- ☆18 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- Code repository for the paper "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" ☆56 · Updated 5 months ago
- ☆23 · Updated 11 months ago
- Source code for MEA-Defender, accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆23 · Updated last year
- [AAAI '21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated 5 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆17 · Updated 9 months ago
- Official implementation of the CVPR 2022 (Oral) paper "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" ☆26 · Updated 3 years ago
- ☆31 · Updated 3 years ago
- ☆20 · Updated last year
- [IEEE S&P '24] ODSCAN: Backdoor Scanning for Object Detection Models ☆17 · Updated 5 months ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆16 · Updated last year
- ☆12 · Updated 3 years ago
- A toolbox for backdoor attacks ☆22 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆13 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- [MM '23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆28 · Updated 3 months ago
- ☆38 · Updated 3 years ago
- ☆27 · Updated last month
- Invisible Backdoor Attack with Sample-Specific Triggers ☆94 · Updated 2 years ago
- ☆21 · Updated last year