ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings]
☆226 · Updated Sep 29, 2024
Alternatives and similar repositories for ShieldLM
Users interested in ShieldLM are comparing it to the repositories listed below.
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,136 · Updated Feb 27, 2024
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆276 · Updated Jul 28, 2025
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Updated Mar 8, 2025
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆113 · Updated Dec 2, 2024
- ☆124 · Updated Feb 3, 2025
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆114 · Updated Dec 15, 2025
- JailBench: a Chinese dataset for evaluating jailbreak attack risks of large language models [PAKDD 2025] ☆170 · Updated Mar 3, 2025
- ☆14 · Updated Feb 26, 2025
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆79 · Updated Jan 23, 2025
- SC-Safety: a multi-round adversarial safety benchmark for Chinese large language models ☆150 · Updated Mar 15, 2024
- Research on evaluating and aligning the values of Chinese large language models ☆554 · Updated Jul 20, 2023
- ☆48 · Updated Jul 14, 2024
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆573 · Updated Feb 27, 2026
- The official repository for the guided jailbreak benchmark ☆29 · Updated Jul 28, 2025
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆500Nov 18, 2025Updated 4 months ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆826 · Updated Mar 27, 2025
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆101 · Updated Mar 7, 2024
- Accepted by ECCV 2024 ☆193 · Updated Oct 15, 2024
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆176 · Updated Oct 27, 2023
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆113 · Updated Oct 11, 2024
- ☆60 · Updated Jun 5, 2024
- Code implementation of R^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning ☆22 · Updated Jul 8, 2024
- [ICLR 2026] BARREL: Boundary-Aware Reasoning for Factual and Reliable LRMs ☆17 · Updated May 21, 2025
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆322 · Updated May 13, 2025
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆553 · Updated Apr 4, 2025
- ☆61 · Updated May 21, 2025
- Flames is a highly adversarial Chinese benchmark for evaluating LLM harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group. ☆62 · Updated May 21, 2024
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆173 · Updated Feb 20, 2024
- ☆704 · Updated Jul 2, 2025
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list. ☆235 · Updated Aug 29, 2025
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆160 · Updated Nov 30, 2024
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆130 · Updated Feb 24, 2025
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆242 · Updated Mar 18, 2026
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Updated Feb 23, 2024
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆63 · Updated Mar 2, 2026
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Updated Apr 23, 2025
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated Sep 11, 2025
- ☆19 · Updated May 14, 2025
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" ☆16 · Updated Jul 15, 2024