niconi19 / LLM-Conversation-Safety
[NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
☆109 · Updated Aug 7, 2024
Alternatives and similar repositories for LLM-Conversation-Safety
Users interested in LLM-Conversation-Safety are comparing it to the repositories listed below.
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,856 · Updated Jan 24, 2026
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · Updated May 2, 2025
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆18 · Updated Aug 22, 2024
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆27 · Updated Apr 2, 2025
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated Mar 25, 2024
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆39 · Updated Aug 2, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆527 · Updated Apr 4, 2025
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · Updated May 8, 2023
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆99 · Updated Jan 11, 2026
- ☆696 · Updated Jul 2, 2025
- The code implementation of MuScleLoRA (Accepted in ACL 2024) ☆10 · Updated Dec 1, 2024
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆85 · Updated May 9, 2025
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Updated Jun 7, 2024
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Updated Dec 23, 2023
- ☆13 · Updated Jun 17, 2024
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆108 · Updated Mar 8, 2024
- ☆164 · Updated Sep 2, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆427 · Updated Jan 22, 2025
- KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality ☆40 · Updated Dec 1, 2025
- A framework to train language models to learn invariant representations ☆14 · Updated Jan 24, 2022
- This repo supports automatic line plots for multi-seed event files from TensorBoard ☆12 · Updated Jun 23, 2022
- ☆11 · Updated Oct 3, 2021
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning ☆24 · Updated Oct 7, 2025
- ☆57 · Updated Jun 13, 2024
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated Oct 8, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,493 · Updated Aug 2, 2024
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆97 · Updated Mar 7, 2024
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Updated Dec 18, 2024
- Papers and resources related to the security and privacy of LLMs 🤖 ☆561 · Updated Jun 8, 2025
- ☆20 · Updated Oct 28, 2025
- Code released for our TIP 2021 paper "Adversarial Domain Adaptation with Prototype-based Normalized Output Conditioner" ☆15 · Updated May 24, 2023
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated Feb 22, 2024
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022… ☆40 · Updated Jan 30, 2023
- Applying Reinforcement Learning from Human Feedback to language models to teach them to write short-story responses to writing prompts ☆14 · Updated May 5, 2022
- ☆74 · Updated Jan 21, 2026
- Resolving Knowledge Conflicts in Large Language Models, COLM 2024 ☆18 · Updated Oct 7, 2025
- ☆18 · Updated Mar 25, 2024
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆619 · Updated Jun 24, 2025
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆65 · Updated Oct 27, 2024