MarkGHX / BiScopeLinks
Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens"
☆23 · Updated 5 months ago
Alternatives and similar repositories for BiScope
Users interested in BiScope are comparing it to the repositories listed below.
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆16 · Updated last year
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆17 · Updated 7 months ago
- A toolbox for backdoor attacks ☆22 · Updated 2 years ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆39 · Updated 2 months ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆16 · Updated last week
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆27 · Updated 4 months ago
- Code repository for the submission "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" ☆61 · Updated 8 months ago
- Repository for Towards Codable Watermarking for Large Language Models ☆38 · Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated 7 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- Official code repository for the paper "Exploiting the Adversarial Example Vulnerability of Transfer Learning of Source Code" ☆12 · Updated 3 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆43 · Updated 7 months ago
- Composite Backdoor Attacks Against Large Language Models ☆16 · Updated last year
- Multi-bit language model watermarking (NAACL '24) ☆14 · Updated 11 months ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated 7 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆26 · Updated 11 months ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆200 · Updated 2 weeks ago
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆43 · Updated 2 years ago