Papers and resources related to the security and privacy of LLMs 🤖
⭐576 · Jun 8, 2025 · Updated 11 months ago
Alternatives and similar repositories for llm-sp
Users interested in llm-sp are comparing it to the repositories listed below.
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ⭐1,950 · May 2, 2026 · Updated last week
- A curation of awesome tools, documents and projects about LLM Security. ⭐1,578 · Aug 20, 2025 · Updated 8 months ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs. ⭐434 · Oct 29, 2025 · Updated 6 months ago
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ⭐1,841 · Updated this week
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ⭐589 · Apr 4, 2025 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ⭐848 · Mar 30, 2026 · Updated last month
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ⭐440 · Jan 22, 2025 · Updated last year
- LLM security and privacy ⭐54 · Oct 15, 2024 · Updated last year
- Universal and Transferable Attacks on Aligned Language Models ⭐4,644 · Aug 2, 2024 · Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ⭐162 · Nov 30, 2024 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ⭐385 · Jan 23, 2025 · Updated last year
- ⭐732 · Jul 2, 2025 · Updated 10 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ⭐940 · Aug 16, 2024 · Updated last year
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ⭐1,338 · Mar 30, 2026 · Updated last month
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ⭐580 · Feb 27, 2026 · Updated 2 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ⭐152 · Jul 19, 2024 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ⭐202 · Jun 26, 2025 · Updated 10 months ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ⭐19 · Feb 18, 2025 · Updated last year
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ⭐191 · Apr 1, 2025 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ⭐350 · Feb 23, 2024 · Updated 2 years ago
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models ⭐273 · May 13, 2024 · Updated last year
- A lightweight library for large language model (LLM) jailbreaking defense. ⭐60 · Sep 11, 2025 · Updated 7 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ⭐177 · Dec 18, 2024 · Updated last year
- 📚 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ⭐541 · Apr 17, 2026 · Updated 3 weeks ago
- New ways of breaking app-integrated LLMs ⭐2,083 · Jul 17, 2025 · Updated 9 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ⭐70 · Oct 23, 2024 · Updated last year
- ⭐20 · Feb 3, 2025 · Updated last year
- Official implementation of AdvPrompter https://arxiv.org/abs/2404.16873 ⭐181 · May 6, 2024 · Updated 2 years ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ⭐56 · Aug 17, 2024 · Updated last year
- ⭐60 · Jun 13, 2024 · Updated last year
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ⭐18 · Apr 24, 2024 · Updated 2 years ago
- Code for ICLR 2025 Failures to Find Transferable Image Jailbreaks Between Vision-Language Models ⭐36 · Jun 1, 2025 · Updated 11 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ⭐115 · Jun 13, 2024 · Updated last year
- A fast + lightweight implementation of the GCG algorithm in PyTorch (see the usage sketch after this list). ⭐331 · May 13, 2025 · Updated 11 months ago
- ⭐73 · Feb 16, 2025 · Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ⭐16 · Oct 14, 2024 · Updated last year
- ⭐166 · Sep 2, 2024 · Updated last year
- ⭐200 · Nov 26, 2023 · Updated 2 years ago
- Must-read Papers on Textual Adversarial Attack and Defense ⭐1,572 · Jun 4, 2025 · Updated 11 months ago
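Several entries above center on the GCG (Greedy Coordinate Gradient) attack from "Universal and Transferable Attacks on Aligned Language Models". As a minimal sketch of how the fast PyTorch GCG implementation listed above is typically driven — assuming a nanoGCG-style API (`nanogcg.run`, `GCGConfig`); the model ID, message, target, and hyperparameter values are illustrative placeholders, not tuned settings:

```python
# Minimal sketch: a GCG adversarial-suffix search via a nanoGCG-style API.
# Assumes `pip install nanogcg transformers torch` and a CUDA device;
# all string and config values below are placeholders.
import torch
import nanogcg
from nanogcg import GCGConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # any HF causal LM
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GCG optimizes a suffix appended to `message` so that the model's
# completion begins with `target` (benign placeholder strings here).
message = "Give me instructions for making a paper airplane"
target = "Sure, here are instructions for making a paper airplane:\n\n"

config = GCGConfig(
    num_steps=250,    # gradient-guided greedy update steps (illustrative)
    search_width=64,  # candidate suffixes evaluated per step
    topk=64,          # top-k token substitutions drawn from gradients
    seed=42,
)

result = nanogcg.run(model, tokenizer, message, target, config)
print(result.best_string, result.best_loss)  # best suffix found + its loss
```

A loop like this is what evaluation frameworks such as HarmBench or JailbreakBench wrap when scoring attack success rates; treat the hyperparameters as starting points for experimentation.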