[NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
☆111 · Aug 7, 2024 · Updated last year
Alternatives and similar repositories for LLM-Conversation-Safety
Users interested in LLM-Conversation-Safety are comparing it to the repositories listed below.
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,827 · Apr 3, 2026 · Updated 2 weeks ago
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,937 · Apr 2, 2026 · Updated 2 weeks ago
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · May 8, 2023 · Updated 2 years ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆90 · May 2, 2025 · Updated 11 months ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆21 · Mar 25, 2024 · Updated 2 years ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆39 · Aug 2, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆348 · Feb 23, 2024 · Updated 2 years ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆579 · Apr 4, 2025 · Updated last year
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆19 · Aug 22, 2024 · Updated last year
- ☆51 · Dec 24, 2024 · Updated last year
- LiveSecBench: A dynamic Chinese LLM safety leaderboard ☆28 · Mar 9, 2026 · Updated last month
- ☆719 · Jul 2, 2025 · Updated 9 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆88 · May 9, 2025 · Updated 11 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆323 · Jun 7, 2024 · Updated last year
- A Domain-Specific Language, Jailbreak Attack Synthesizer and Dynamic LLM Redteaming Toolkit ☆27 · Dec 5, 2024 · Updated last year
- Official Repo of "Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents" ☆72 · Oct 28, 2025 · Updated 5 months ago
- ☆24 · Dec 8, 2024 · Updated last year
- ☆78 · Jan 21, 2026 · Updated 2 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆381 · Jan 23, 2025 · Updated last year
- Codes and datasets of the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆110 · Mar 8, 2024 · Updated 2 years ago
- A novel approach to improve the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆72 · May 22, 2025 · Updated 10 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆571 · Jun 8, 2025 · Updated 10 months ago
- ☆19 · Oct 2, 2023 · Updated 2 years ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,613 · Aug 2, 2024 · Updated last year
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Apr 8, 2026 · Updated last week
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Dec 23, 2023 · Updated 2 years ago
- ☆165 · Sep 2, 2024 · Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆160 · Nov 30, 2024 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"