casperllm / CASPER
☆14 · Updated last year
Alternatives and similar repositories for CASPER:
Users interested in CASPER are comparing it to the repositories listed below.
- ☆26 · Updated 6 months ago
- ☆79 · Updated last year
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" · ☆43 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- ☆18 · Updated 10 months ago
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning…" · ☆17 · Updated last year
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM · ☆30 · Updated 3 months ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… · ☆21 · Updated 7 months ago
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" · ☆18 · Updated 9 months ago
- A toolbox for backdoor attacks. · ☆21 · Updated 2 years ago
- Repository for "Towards Codable Watermarking for Large Language Models" · ☆36 · Updated last year
- ☆51 · Updated 4 months ago
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. · ☆23 · Updated 9 months ago
- ☆17 · Updated 2 months ago
- [EMNLP 2024] Official implementation of "CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models" · ☆15 · Updated last month
- Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens" · ☆16 · Updated last month
- ☆20 · Updated last year
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" · ☆12 · Updated 4 months ago
- ☆18 · Updated 7 months ago
- A lightweight library for large language model (LLM) jailbreak defense. · ☆51 · Updated 6 months ago
- [ACL 2024] "Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization" · ☆22 · Updated 9 months ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models · ☆18 · Updated last month
- ☆20 · Updated last year
- Submission guide and discussion board for the AI Singapore Global Challenge for Safe and Secure LLMs (Track 2A) · ☆10 · Updated 3 months ago
- [IEEE S&P'24] "ODSCAN: Backdoor Scanning for Object Detection Models" · ☆17 · Updated 4 months ago
- Code repository of our submission "Understanding the Dark Side of LLMs’ Intrinsic Self-Correction" · ☆56 · Updated 4 months ago
- Red Queen Dataset and data generation template · ☆15 · Updated 6 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794) · ☆19 · Updated 9 months ago
- ☆31 · Updated 7 months ago