Raytsang123 / CLIBE
[NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models"
☆12 · Updated 4 months ago
Alternatives and similar repositories for CLIBE:
Users interested in CLIBE are comparing it to the repositories listed below:
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆15 · Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆19 · Updated 5 months ago
- Source code for MEA-Defender; the paper was accepted by the IEEE Symposium on Security and Privacy (S&P) 2024. ☆22 · Updated last year
- ☆13 · Updated last year
- Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens" ☆15 · Updated last month
- ☆20 · Updated last year
- ☆15 · Updated 2 years ago
- Composite Backdoor Attacks Against Large Language Models ☆13 · Updated last year
- A toolbox for backdoor attacks. ☆21 · Updated 2 years ago
- ☆25 · Updated 6 months ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆17 · Updated 3 months ago
- ☆80 · Updated last year
- ☆20 · Updated 7 months ago
- ☆18 · Updated 2 years ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated last year
- ☆14 · Updated 11 months ago
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆14 · Updated last month
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆33 · Updated last year
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning…" ☆17 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated 11 months ago
- ☆12 · Updated 3 years ago
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆42 · Updated 2 years ago
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆33 · Updated 5 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 8 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆19 · Updated 5 months ago
- ☆81 · Updated 3 years ago
- Source code for Data-free Backdoor; the paper was accepted by the 32nd USENIX Security Symposium (USENIX Security 2023). ☆30 · Updated last year
- Repository for Towards Codable Watermarking for Large Language Models ☆36 · Updated last year
- ☆18 · Updated last year
- ☆47 · Updated 3 months ago