Official implementation of [USENIX Sec'25] "StruQ: Defending Against Prompt Injection with Structured Queries"
☆64 · Nov 10, 2025 · Updated 3 months ago
Alternatives and similar repositories for StruQ
Users interested in StruQ are comparing it to the repositories listed below.
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆87 · Jul 24, 2025 · Updated 7 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆454 · Feb 3, 2026 · Updated last month
- PFI: Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents ☆26 · Mar 26, 2025 · Updated 11 months ago
- The official repository for the guided jailbreak benchmark ☆29 · Jul 28, 2025 · Updated 7 months ago
- [EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents ☆16 · Sep 16, 2025 · Updated 5 months ago
- ☆48 · Feb 25, 2026 · Updated last week
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- Dataset for the Tensor Trust project ☆48 · Mar 17, 2024 · Updated last year
- ☆14 · Feb 8, 2025 · Updated last year
- This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calc… ☆12 · Feb 14, 2023 · Updated 3 years ago
- ☆29 · Oct 27, 2023 · Updated 2 years ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆59 · Mar 11, 2025 · Updated 11 months ago
- Source code for the NAACL 2025 Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" ☆15 · Dec 16, 2025 · Updated 2 months ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Jun 12, 2023 · Updated 2 years ago
- Code and data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Sep 27, 2024 · Updated last year
- ☆118 · Jul 2, 2024 · Updated last year
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆76 · Mar 1, 2025 · Updated last year
- ☆20 · Oct 28, 2025 · Updated 4 months ago
- ☆45 · Jan 21, 2026 · Updated last month
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · May 25, 2021 · Updated 4 years ago
- Source code for "Neural Anisotropy Directions" ☆16 · Nov 17, 2020 · Updated 5 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Jun 29, 2025 · Updated 8 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆88 · May 9, 2025 · Updated 10 months ago
- ☆19 · Feb 25, 2024 · Updated 2 years ago
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024) ☆23 · Oct 23, 2024 · Updated last year
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" ☆18 · Apr 15, 2025 · Updated 10 months ago
- Awesome LLM jailbreak academic papers ☆127 · Nov 3, 2023 · Updated 2 years ago
- ☆46 · Feb 13, 2026 · Updated 3 weeks ago
- Whispers in the Machine: Confidentiality in Agentic Systems ☆41 · Dec 11, 2025 · Updated 2 months ago
- The implementation of our ICLR 2021 work "Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits" ☆18 · Jul 20, 2021 · Updated 4 years ago
- Code for semi-supervised robust training (SRT) ☆18 · Mar 24, 2023 · Updated 2 years ago
- ☆20 · Jan 15, 2024 · Updated 2 years ago
- ☆18 · Mar 30, 2025 · Updated 11 months ago
- [ACL 2025] Official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" ☆36 · Aug 4, 2025 · Updated 7 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆106 · Apr 15, 2024 · Updated last year
- A new algorithm that formulates jailbreaking as a reasoning problem ☆26 · Jul 2, 2025 · Updated 8 months ago
- ☆22 · Sep 17, 2024 · Updated last year
- ☆20 · May 6, 2022 · Updated 3 years ago