Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"
☆86 · Updated Jul 24, 2025
Alternatives and similar repositories for SecAlign
Users interested in SecAlign are comparing it to the repositories listed below.
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries · ☆63 · Updated Nov 10, 2025
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs · ☆395 · Updated Oct 29, 2025
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] · ☆109 · Updated Sep 27, 2024
- ☆14 · Updated Mar 9, 2025
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks · ☆106 · Updated Apr 15, 2024
- Official implementation of NeurIPS 2024 paper - BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens · ☆28 · Updated Feb 17, 2026
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) · ☆178 · Updated May 6, 2024
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… · ☆70 · Updated Aug 14, 2025
- Repo for the paper "Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks" · ☆51 · Updated this week
- Code to generate NeuralExecs (prompt injection for LLMs) · ☆27 · Updated Oct 5, 2025
- ☆28 · Updated Sep 11, 2025
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… · ☆80 · Updated Sep 1, 2025
- Official repo for FSE'24 paper "CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking" · ☆18 · Updated Mar 10, 2025
- Code to conduct an embedding attack on LLMs · ☆31 · Updated Jan 10, 2025
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" · ☆17 · Updated this week
- Official repository for ACL 2024 paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding · ☆151 · Updated Jul 19, 2024
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024 · ☆116 · Updated Jun 13, 2024
- TAP: An automated jailbreaking method for black-box LLMs · ☆221 · Updated Dec 10, 2024
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting · ☆18 · Updated Apr 15, 2025
- [ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms · ☆35 · Updated Jun 4, 2025
- [ACL 2025] The official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" · ☆32 · Updated Aug 4, 2025
- ☆37 · Updated Dec 19, 2024
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability · ☆176 · Updated Dec 18, 2024
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆61 · Updated Aug 8, 2024
- ☆44 · Updated Feb 26, 2025
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" · ☆69 · Updated Oct 23, 2024
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794… · ☆23 · Updated Jul 26, 2024
- [EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents · ☆16 · Updated Sep 16, 2025
- ☆117 · Updated Jul 2, 2024
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated Sep 11, 2023
- ☆30 · Updated Jun 19, 2023
- Agent Security Bench (ASB) · ☆186 · Updated Oct 27, 2025
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆65 · Updated Jan 11, 2025
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… · ☆339 · Updated Feb 23, 2024
- A fast + lightweight implementation of the GCG algorithm in PyTorch · ☆318 · Updated May 13, 2025
- PhishDecloaker: Detecting CAPTCHA-cloaked Phishing Websites via Hybrid Vision-based Interactive Models · ☆14 · Updated Jan 3, 2025
- Mixture of Expert (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations · ☆12 · Updated Feb 11, 2024
- ☆698 · Updated Jul 2, 2025
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆87 · Updated May 9, 2025