parameterlab / trap
Source code of "TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification", ACL 2024 (Findings)
☆13 · Updated 9 months ago
Alternatives and similar repositories for trap
Users interested in trap are comparing it to the libraries listed below.
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆91 · Updated last year
- ☆61 · Updated last month
- General research for Dreadnode ☆25 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆32 · Updated last year
- DEF CON 31 AI Village - LLMs: Loose Lips Multipliers ☆10 · Updated 2 years ago
- ☆73 · Updated last year
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the victim… ☆41 · Updated 6 months ago
- Code for the shelLM tool ☆55 · Updated 7 months ago
- A collection of prompt injection mitigation techniques. ☆24 · Updated 2 years ago
- Adversarial Tokenization ☆28 · Updated 2 weeks ago
- ATLAS tactics, techniques, and case studies data ☆78 · Updated 3 weeks ago
- A benchmark for prompt injection detection systems. ☆128 · Updated last week
- CyberBench: A Multi-Task Cyber LLM Benchmark ☆18 · Updated 4 months ago
- Tree of Attacks (TAP) jailbreaking implementation ☆115 · Updated last year
- Security Weaknesses in Machine Learning ☆15 · Updated 2 years ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆52 · Updated 8 months ago
- Papers about red teaming LLMs and multimodal models. ☆135 · Updated 3 months ago
- Data Scientists Go To Jupyter ☆66 · Updated 6 months ago
- LLM security and privacy ☆51 · Updated 10 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆167 · Updated 5 months ago
- This repository provides a benchmark for prompt injection attacks and defenses. ☆275 · Updated last month
- ☆85 · Updated 9 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆55 · Updated 10 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆114 · Updated last year
- Code repository for "AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models" ☆76 · Updated this week
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆133 · Updated 8 months ago
- Code for the paper "EMBERSim: A Large-Scale Databank for Boosting Similarity Search in Malware Analysis" ☆31 · Updated last year
- Central repo for talks and presentations ☆46 · Updated last year
- An Adaptive Misuse Detection System ☆44 · Updated 10 months ago
- Autonomous assumed-breach penetration testing of Active Directory networks ☆21 · Updated 2 weeks ago