ydyjya / Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the safety implications, challenges, and advancements surrounding these powerful models.
☆1,622 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LLM-Safety
Users interested in Awesome-LLM-Safety are comparing it to the repositories listed below.
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,691 · Updated this week
- Papers and resources related to the security and privacy of LLMs 🤖 ☆536 · Updated 4 months ago
- A curation of awesome tools, documents, and projects about LLM security. ☆1,410 · Updated last month
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,046 · Updated last week
- 😎 An up-to-date, curated list of papers, methods, and resources on attacks against large vision-language models. ☆393 · Updated last week
- "Stones from other hills may serve to polish jade": JADE-DB, a demo benchmark dataset released by Fudan University's Whitzard-AI, targeting domestic open-source and international commercial large models. ☆458 · Updated 3 months ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆724 · Updated 6 months ago
- Must-read papers on knowledge editing for large language models. ☆1,167 · Updated 2 months ago
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆383 · Updated 8 months ago
- A resource repository for machine unlearning in large language models. ☆493 · Updated 2 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety. ☆194 · Updated 7 months ago
- An awesome collection of LLM surveys. ☆378 · Updated 4 months ago
- Awesome papers on LLM interpretability. ☆555 · Updated last month
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models. [NeurIPS 2024 Datasets and Benchmarks Track] ☆424 · Updated 6 months ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate the safety of LLMs. [ACL 2024] ☆251 · Updated 2 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models. ☆202 · Updated 7 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey. ☆106 · Updated last year
- ☆13 · Updated 9 months ago
- Daily updated LLM papers; subscriptions welcome 👏, and star it if you like it 🌟 … ☆1,184 · Updated last year
- Accepted by the IJCAI-24 Survey Track. ☆216 · Updated last year
- Up-to-date LLM watermarking papers. 🔥🔥🔥 ☆356 · Updated 9 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors. [EMNLP 2024 Findings] ☆211 · Updated last year
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models. ☆598 · Updated 3 months ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models. ☆218 · Updated 2 weeks ago
- A survey on harmful fine-tuning attacks for large language models. ☆212 · Updated this week
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts. ☆529 · Updated last year
- SecProbe: a task-driven evaluation system for assessing the safety capabilities of large models. ☆14 · Updated 10 months ago
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,081 · Updated last year
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, and exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆939 · Updated last month
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆149 · Updated 10 months ago