thu-coai / Safety-Prompts
Chinese safety prompts for evaluating and improving the safety of LLMs.
☆1,100 Updated last year
Alternatives and similar repositories for Safety-Prompts
Users who are interested in Safety-Prompts are comparing it to the repositories listed below.
- Research on evaluating and aligning the values of Chinese large language models ☆544 Updated 2 years ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆474Updated last week
- SC-Safety: a multi-round adversarial safety benchmark for Chinese LLMs ☆146 Updated last year
- A curated collection of open-source SFT datasets, continuously updated ☆557 Updated 2 years ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆261 Updated 3 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆216 Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆792 Updated 11 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆421 Updated 3 weeks ago
- Awesome LLM Benchmarks for evaluating LLMs across text, code, image, audio, video and more. ☆153 Updated last year
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,780 Updated 3 months ago
- The official repository of the paper COLD: A Benchmark for Chinese Offensive Language Detection ☆298 Updated 2 years ago
- ☆532 Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆339 Updated 7 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆695 Updated 10 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆580 Updated this week
- 活字 (HuoZi), a general-purpose large language model ☆391 Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆1,019 Updated last year
- A manually refined Chinese dialogue dataset and fine-tuning code for ChatGLM ☆1,196 Updated 6 months ago
- ☆922 Updated last year
- A Chinese self-instruct dataset built with ChatGPT ☆119 Updated 2 years ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆617 Updated 10 months ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B, then merges the LoRA model into the base model and quantizes it to 4 bits. ☆356 Updated 2 years ago
- An open-sourced knowledgeable large language model framework. ☆1,356 Updated 10 months ago
- This project collects open-source datasets for table-intelligence tasks (e.g., table QA and table-to-text generation), converts the raw data into instruction-tuning format to fine-tune LLMs, strengthening their understanding of tabular data and ultimately building a large language model dedicated to table-intelligence tasks. ☆624 Updated last year
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,080 Updated last year
- An intro to retrieval-augmented large language models ☆304 Updated 2 years ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ samples ☆501 Updated 3 years ago
- ☆358 Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 Updated 2 years ago
- Chinese LLM fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (… ☆213 Updated last year