A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
☆117Apr 15, 2024Updated last year
Alternatives and similar repositories for BIPIA
Users that are interested in BIPIA are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆94Jul 24, 2025Updated 8 months ago
- [ACL 2025] The official implementation of the paper "PIGuard: Prompt Injection Guardrail via Mitigating Overdefense for Free".☆68Dec 4, 2025Updated 4 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆70Oct 23, 2024Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs☆422Oct 29, 2025Updated 5 months ago
- A benchmark for prompt injection detection systems.☆177Apr 2, 2026Updated last week
- Managed hosting for WordPress and PHP on Cloudways • AdManaged hosting with the flexibility to host WordPress, Magento, Laravel, or PHP apps, on multiple cloud providers. Cloudways by DigitalOcean.
- Code for Voice Jailbreak Attacks Against GPT-4o.☆38May 31, 2024Updated last year
- Code to conduct an embedding attack on LLMs☆31Jan 10, 2025Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI.☆58Nov 13, 2023Updated 2 years ago
- ☆23Oct 25, 2024Updated last year
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 6 months ago
- The official implementation of the paper "AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World …☆36Mar 4, 2026Updated last month
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries☆68Nov 10, 2025Updated 4 months ago
- Every practical and proposed defense against prompt injection.☆671Feb 22, 2025Updated last year
- An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024)☆114Jan 21, 2025Updated last year
- GPU virtual machines on DigitalOcean Gradient AI • AdGet to production fast with high-performance AMD and NVIDIA GPUs you can spin up in seconds. The definition of operational simplicity.
- A lightweight library for large laguage model (LLM) jailbreaking defense.☆61Sep 11, 2025Updated 6 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆515Mar 30, 2026Updated last week
- Code used to run the platform for the LLM CTF colocated with SaTML 2024☆28Mar 20, 2024Updated 2 years ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆832Mar 30, 2026Updated last week
- The automated prompt injection framework for LLM-integrated applications.☆258Sep 12, 2024Updated last year
- A curated collection of papers and related projects on using LLMs for privacy.☆29Oct 8, 2025Updated 6 months ago
- ☆129Jul 2, 2024Updated last year
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur…☆88May 9, 2025Updated 11 months ago
- ☆39Jul 31, 2025Updated 8 months ago
- Proton VPN Special Offer - Get 70% off • AdSpecial partner offer. Trusted by over 100 million users worldwide. Tested, Approved and Recommended by Experts.
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆162Nov 30, 2024Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks"☆62Aug 8, 2024Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆86Sep 1, 2025Updated 7 months ago
- Implementations of 3 phishing detection and identification baselines☆21Nov 25, 2024Updated last year
- New ways of breaking app-integrated LLMs☆2,064Jul 17, 2025Updated 8 months ago
- [ICLR 2026] The official code for "Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models"☆26Feb 7, 2026Updated 2 months ago
- The official repository for paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance"☆45Apr 21, 2024Updated last year
- [ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms☆39Jun 4, 2025Updated 10 months ago
- Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models☆32Oct 6, 2025Updated 6 months ago
- 1-Click AI Models by DigitalOcean Gradient • AdDeploy popular AI models on DigitalOcean Gradient GPU virtual machines with just a single click and start building anything your business needs.
- ☆98Oct 15, 2023Updated 2 years ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types☆24Nov 29, 2024Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆894Aug 16, 2024Updated last year
- TAP: An automated jailbreaking method for black-box LLMs☆226Dec 10, 2024Updated last year
- ☆21Jan 6, 2025Updated last year
- ICLR2024 Paper. Showing properties of safety tuning and exaggerated safety.☆93May 9, 2024Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]☆564Apr 4, 2025Updated last year