A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
☆108Apr 15, 2024Updated last year
Alternatives and similar repositories for BIPIA
Users that are interested in BIPIA are comparing it to the libraries listed below
Sorting:
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free".☆63Dec 4, 2025Updated 3 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆89Jul 24, 2025Updated 7 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆69Oct 23, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆409Oct 29, 2025Updated 4 months ago
- A collection of prompt injection mitigation techniques.☆28Aug 19, 2023Updated 2 years ago
- Code for Voice Jailbreak Attacks Against GPT-4o.☆37May 31, 2024Updated last year
- Utilities for Python developing and debugging.☆25Dec 1, 2021Updated 4 years ago
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI.☆56Nov 13, 2023Updated 2 years ago
- ☆23Oct 25, 2024Updated last year
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 5 months ago
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries☆65Nov 10, 2025Updated 4 months ago
- Every practical and proposed defense against prompt injection.☆659Feb 22, 2025Updated last year
- A fast + lightweight implementation of the GCG algorithm in PyTorch☆321May 13, 2025Updated 10 months ago
- 针对大模型的后门攻击☆12Jun 30, 2024Updated last year
- An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024)☆113Jan 21, 2025Updated last year
- A lightweight library for large laguage model (LLM) jailbreaking defense.☆61Sep 11, 2025Updated 6 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆488Mar 12, 2026Updated last week
- Code used to run the platform for the LLM CTF colocated with SaTML 2024☆28Mar 20, 2024Updated 2 years ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆826Mar 27, 2025Updated 11 months ago
- The automated prompt injection framework for LLM-integrated applications.☆258Sep 12, 2024Updated last year
- A curated collection of papers and related projects on using LLMs for privacy.☆25Oct 8, 2025Updated 5 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur…☆88May 9, 2025Updated 10 months ago
- ☆121Jul 2, 2024Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆162Nov 30, 2024Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks"☆62Aug 8, 2024Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆84Sep 1, 2025Updated 6 months ago
- Implementations of 3 phishing detection and identification baselines☆21Nov 25, 2024Updated last year
- An unofficial implementation of AutoDAN attack on LLMs (arXiv:2310.15140)☆45Feb 8, 2024Updated 2 years ago
- New ways of breaking app-integrated LLMs☆2,063Jul 17, 2025Updated 8 months ago
- The python implementation of our "UA-FedRec: Untargeted Attack on Federated News Recommendation" in KDD 2023.☆19Aug 2, 2022Updated 3 years ago
- The official repository for paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance"☆45Apr 21, 2024Updated last year
- [ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms☆37Jun 4, 2025Updated 9 months ago
- ☆98Oct 15, 2023Updated 2 years ago
- 华中科技大学网络安全课程设计-Linux下的状态检测防火墙☆11Oct 17, 2022Updated 3 years ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types☆25Nov 29, 2024Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆879Aug 16, 2024Updated last year
- TAP: An automated jailbreaking method for black-box LLMs☆222Dec 10, 2024Updated last year
- ☆11Jan 3, 2024Updated 2 years ago
- ☆20Jan 6, 2025Updated last year