whitzard-ai / jade-db
"他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB
☆396 · Updated last month
Alternatives and similar repositories for jade-db:
Users interested in jade-db are comparing it to the repositories listed below.
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆987 · Updated last year
- Research on evaluating and aligning the values of Chinese LLMs. ☆504 · Updated last year
- SC-Safety: a multi-turn adversarial safety benchmark for Chinese LLMs. ☆131 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆211 · Updated 9 months ago
- JailBench: a Chinese dataset for evaluating the jailbreak-attack risks of large language models. [PAKDD 2025] ☆77 · Updated last month
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors. [EMNLP 2024 Findings] ☆184 · Updated 6 months ago
- Fudan Whitzard LLM safety benchmark suite (Summer 2024 edition). ☆36 · Updated 8 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts. ☆473 · Updated 6 months ago
- The official repository of the paper "COLD: A Benchmark for Chinese Offensive Language Detection". ☆263 · Updated 2 years ago
- AutoAudit: an LLM for cybersecurity. ☆323 · Updated last month
- Awesome LLM Benchmarks: evaluating LLMs across text, code, image, audio, video, and more. ☆140 · Updated last year
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain. ☆178 · Updated 3 months ago
- A survey of adversarial attacks against large language models. ☆25 · Updated last year
- Flames is a highly adversarial Chinese benchmark for LLM harmlessness evaluation, developed by Shanghai AI Lab and the Fudan NLP Group. ☆46 · Updated 10 months ago
- A curated collection of open-source SFT datasets, continuously updated. ☆504 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆622 · Updated 2 weeks ago
- A multi-dimensional Chinese alignment evaluation benchmark for LLMs. (ACL 2024) ☆373 · Updated 7 months ago
- S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models. ☆60 · Updated last month
- SecProbe: a task-driven system for evaluating the safety capabilities of large language models. ☆13 · Updated 4 months ago
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark. ☆99 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆331 · Updated 9 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆631 · Updated 3 months ago
- Hide and Seek (HaS): A Framework for Prompt Privacy Protection. ☆39 · Updated last year
- A Chinese self-instruct dataset built with ChatGPT. ☆116 · Updated last year
- MarkLLM: An Open-Source Toolkit for LLM Watermarking. (EMNLP 2024 Demo) ☆371 · Updated last month
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" ☆86 · Updated 6 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese. ☆751 · Updated 4 months ago