MarkGHX / BiScope
Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens"
☆27 · Updated 8 months ago
Alternatives and similar repositories for BiScope
Users interested in BiScope are comparing it to the libraries listed below.
- ☆20 · Updated last year
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated last year
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆20 · Updated 2 months ago
- ☆18 · Updated 3 years ago
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- ☆36 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆26 · Updated 7 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- ☆15 · Updated last year
- ☆21 · Updated last year
- Official code repository for the paper "Exploiting the Adversarial Example Vulnerability of Transfer Learning of Source Code". ☆16 · Updated 2 months ago
- [NDSS'25] Official implementation of safety misalignment. ☆17 · Updated 11 months ago
- Code repository for the submission "Understanding the Dark Side of LLMs’ Intrinsic Self-Correction". ☆63 · Updated 11 months ago
- ☆82 · Updated 3 months ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated 10 months ago
- Multi-bit language model watermarking (NAACL'24) ☆17 · Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated 11 months ago
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" ☆11 · Updated 3 years ago
- ☆37 · Updated last year
- Fingerprint large language models ☆46 · Updated last year
- Revisiting Character-level Adversarial Attacks for Language Models (ICML 2024) ☆19 · Updated 9 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Updated last year
- ☆15 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆218 · Updated 3 weeks ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆29 · Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆50 · Updated 6 months ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆23 · Updated 3 months ago
- ☆17 · Updated last year
- ☆26 · Updated last year