corca-ai / awesome-llm-security
A curation of awesome tools, documents and projects about LLM Security.
☆1,525 · Aug 20, 2025 · Updated 5 months ago
Alternatives and similar repositories for awesome-llm-security
Users interested in awesome-llm-security are comparing it to the libraries listed below.
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,769 · Feb 1, 2026 · Updated last week
- Papers and resources related to the security and privacy of LLMs 🤖 ☆561 · Jun 8, 2025 · Updated 8 months ago
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,856 · Jan 24, 2026 · Updated 3 weeks ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,493 · Aug 2, 2024 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆815 · Mar 27, 2025 · Updated 10 months ago
- New ways of breaking app-integrated LLMs ☆2,052 · Jul 17, 2025 · Updated 6 months ago
- the LLM vulnerability scanner ☆6,948 · Feb 5, 2026 · Updated last week
- LLM security and privacy ☆53 · Oct 15, 2024 · Updated last year
- ☆696 · Jul 2, 2025 · Updated 7 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed… ☆339 · Feb 12, 2024 · Updated 2 years ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆338 · Feb 23, 2024 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆847 · Aug 16, 2024 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆565 · Sep 24, 2024 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆454 · Jan 31, 2024 · Updated 2 years ago
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- LLM Prompt Injection Detector ☆1,415 · Aug 7, 2024 · Updated last year
- A curated list of awesome security tools, experimental cases, or other interesting things with LLM or GPT. ☆644 · Dec 10, 2025 · Updated 2 months ago
- Payloads for Attacking Large Language Models ☆119 · Jan 13, 2026 · Updated last month
- The Security Toolkit for LLM Interactions ☆2,511 · Dec 15, 2025 · Updated last month
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆427 · Jan 22, 2025 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Jan 23, 2025 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆454 · Feb 26, 2024 · Updated last year
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models ☆266 · May 13, 2024 · Updated last year
- Set of tools to assess and improve LLM security. ☆4,020 · Updated this week
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆527 · Apr 4, 2025 · Updated 10 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆191 · Jun 26, 2025 · Updated 7 months ago
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆1,205 · Feb 6, 2026 · Updated last week
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆317 · May 13, 2025 · Updated 9 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆116 · Jun 13, 2024 · Updated last year
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · May 19, 2024 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Oct 13, 2023 · Updated 2 years ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆391 · Oct 29, 2025 · Updated 3 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆151 · Jul 19, 2024 · Updated last year
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆490 · Jan 27, 2026 · Updated 2 weeks ago
- The automated prompt injection framework for LLM-integrated applications. ☆253 · Sep 12, 2024 · Updated last year
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆52 · Jul 11, 2023 · Updated 2 years ago
- An overview of LLMs for cybersecurity. ☆1,212 · Updated this week
- ☆122 · Nov 13, 2023 · Updated 2 years ago
- An awesome list of papers on privacy attacks against machine learning ☆634 · Mar 18, 2024 · Updated last year