ydyjya / Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the safety implications, challenges, and advancements surrounding these powerful models.
☆1,771 · Updated Feb 1, 2026
Alternatives and similar repositories for Awesome-LLM-Safety
Users interested in Awesome-LLM-Safety are comparing it to the repositories listed below.
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,860 · Updated Jan 24, 2026
- A curation of awesome tools, documents and projects about LLM Security. ☆1,525 · Updated Aug 20, 2025
- Papers and resources related to the security and privacy of LLMs 🤖 ☆563 · Updated Jun 8, 2025
- Accepted by IJCAI-24 Survey Track ☆231 · Updated Aug 25, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆815 · Updated Mar 27, 2025
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆109 · Updated Aug 7, 2024
- ☆57 · Updated Jun 13, 2024
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆490 · Updated Jan 27, 2026
- Universal and Transferable Attacks on Aligned Language Models ☆4,493 · Updated Aug 2, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆427 · Updated Jan 22, 2025
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs. ☆338 · Updated Feb 23, 2024
- A survey on harmful fine-tuning attacks for large language models ☆232 · Updated Jan 9, 2026
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆530 · Updated Apr 4, 2025
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆1,205 · Updated Feb 6, 2026
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆151 · Updated Jul 19, 2024
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆854 · Updated Aug 16, 2024
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Updated Sep 11, 2025
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models ☆266 · Updated May 13, 2024
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆82 · Updated this week
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆227 · Updated Feb 3, 2026
- ☆696 · Updated Jul 2, 2025
- Accepted by ECCV 2024 ☆187 · Updated Oct 15, 2024
- ☆74 · Updated Jan 21, 2026
- ☆174 · Updated Oct 31, 2025
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆302 · Updated Jan 11, 2026
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆618 · Updated Jun 24, 2025
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,127 · Updated Feb 27, 2024
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Updated Mar 8, 2025
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆565 · Updated Sep 24, 2024
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆163 · Updated Nov 30, 2024
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆317 · Updated May 13, 2025
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Updated Sep 27, 2024
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆272 · Updated Jul 28, 2025
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,582 · Updated Nov 24, 2025
- ☆193 · Updated Nov 26, 2023
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆812 · Updated May 21, 2025
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models" ☆1,076 · Updated Sep 27, 2025
- Official Repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆174 · Updated Apr 23, 2025
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆191 · Updated Jun 26, 2025