ydyjya / Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the safety implications, challenges, and advancements surrounding these powerful models.
☆1,288 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-LLM-Safety:
Users interested in Awesome-LLM-Safety are comparing it to the repositories listed below.
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,304 · Updated this week
- Papers and resources related to the security and privacy of LLMs 🤖 ☆491 · Updated 4 months ago
- A curation of awesome tools, documents, and projects about LLM Security. ☆1,144 · Updated this week
- 😎 An up-to-date, curated list of awesome papers, methods, and resources on attacks against Large Vision-Language Models. ☆249 · Updated 2 weeks ago
- An awesome collection of LLM surveys ☆333 · Updated 6 months ago
- Daily updated LLM papers; subscribe 👏 and leave a star 🌟 if you find it useful. ☆1,096 · Updated 8 months ago
- A resource repository for machine unlearning in large language models ☆361 · Updated this week
- [ICLR 2024] The official implementation of the paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆310 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆198 · Updated 7 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models" ☆1,001 · Updated 4 months ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆599 · Updated this week
- Must-read Papers on Knowledge Editing for Large Language Models. ☆1,044 · Updated 3 weeks ago
- Up-to-date list of LLM watermarking papers 🔥🔥🔥 ☆335 · Updated 3 months ago
- MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 Demo) ☆363 · Updated 2 weeks ago
- Official repo for "GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts" ☆471 · Updated 6 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆129 · Updated last month
- Awesome papers on LLM interpretability ☆430 · Updated 2 months ago
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆733 · Updated last month
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆432 · Updated 2 months ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆320 · Updated last week
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆179 · Updated 6 months ago
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, datasets, evaluations, and analyses. ☆565 · Updated last week
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆540 · Updated 2 weeks ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆391Updated 2 weeks ago
- LLM hallucination paper list ☆312 · Updated last year
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆969 · Updated last year
- Aligning Large Language Models with Human: A Survey ☆726 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆154 · Updated this week
- Collecting awesome papers on RAG for AIGC. We propose a taxonomy of RAG foundations, enhancements, and applications in the paper "Retrieval-Augmented Generation for AI-Generated Content: A Survey" ☆1,561 · Updated 7 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI's APIs. ☆282 · Updated last year