ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings]
☆228 · Sep 29, 2024 · Updated last year
Alternatives and similar repositories for ShieldLM
Users interested in ShieldLM are comparing it to the repositories listed below.
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,151 · Feb 27, 2024 · Updated 2 years ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆283 · Jul 28, 2025 · Updated 9 months ago
- ☆130 · Feb 3, 2025 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆175 · Mar 8, 2025 · Updated last year
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆121 · Dec 2, 2024 · Updated last year
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆113 · Dec 15, 2025 · Updated 4 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- JailBench: a Chinese dataset for evaluating jailbreak attack risks in large language models [PAKDD 2025] ☆174 · Mar 3, 2025 · Updated last year
- ☆14 · Feb 26, 2025 · Updated last year
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆79 · Jan 23, 2025 · Updated last year
- SC-Safety: a multi-round adversarial safety benchmark for Chinese LLMs ☆150 · Mar 15, 2024 · Updated 2 years ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Jul 9, 2024 · Updated last year
- Research on value evaluation and alignment for Chinese large language models ☆556 · Jul 20, 2023 · Updated 2 years ago
- Emoji Attack [ICML 2025] ☆41 · Jul 15, 2025 · Updated 9 months ago
- ☆48 · Jul 14, 2024 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆579 · Feb 27, 2026 · Updated 2 months ago
- ☆65 · May 21, 2025 · Updated 11 months ago
- The official repository for the guided jailbreak benchmark ☆29 · Jul 28, 2025 · Updated 9 months ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆507Nov 18, 2025Updated 5 months ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆104 · Mar 7, 2024 · Updated 2 years ago
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆848 · Mar 30, 2026 · Updated last month
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆181 · Oct 27, 2023 · Updated 2 years ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" ☆112 · Oct 11, 2024 · Updated last year
- ☆60 · Jun 5, 2024 · Updated last year
- Code implementation of R^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning ☆22 · Jul 8, 2024 · Updated last year
- Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group. ☆63 · May 21, 2024 · Updated last year
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" ☆17 · Mar 31, 2025 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]