thu-coai / Safety-Prompts
Chinese safety prompts for evaluating and improving the safety of LLMs.
☆1,034 · Updated last year
Alternatives and similar repositories for Safety-Prompts
Users interested in Safety-Prompts are comparing it to the repositories listed below.
- Research on evaluating and aligning the values of Chinese LLMs ☆522 · Updated last year
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆416Updated 3 months ago
- A curated collection of open-source SFT datasets, continuously updated ☆524 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for LLMs (ACL 2024) ☆394 · Updated 10 months ago
- SC-Safety: a multi-turn adversarial safety benchmark for Chinese LLMs ☆137 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆766 · Updated 6 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆337 · Updated 2 months ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,741 · Updated last year
- A manually refined Chinese dialogue dataset with fine-tuning code for ChatGLM ☆1,179 · Updated last month
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] ☆225 · Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning ☆1,005 · Updated last year
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models ☆659 · Updated 5 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆198 · Updated 8 months ago
- ☆515 · Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 · Updated 2 years ago
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year
- A repository for individual experiments reproducing the LLM pre-training process ☆441 · Updated last month
- 《ChatGPT原理与实战:大型语言模型的算法、技术和私有化》 (ChatGPT in Principle and Practice: algorithms, techniques, and private deployment of large language models) ☆360 · Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆495 · Updated 2 years ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs… ☆541 · Updated 8 months ago
- A Chinese self-instruct dataset built with ChatGPT ☆118 · Updated 2 years ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with LangChain integration for loading a local knowledge base for retrieval-augmented generation (RAG) ☆552 · Updated 11 months ago
- ☆308 · Updated 2 years ago
- Chinese LLM fine-tuning (LLM-SFT) with the math instruction dataset MWP-Instruct; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supported methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); 支持(微… ☆203 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models ☆605 · Updated 5 months ago
- ☆338 · Updated last year
- 活字 (HuoZi) general-purpose large language model ☆390 · Updated 9 months ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B, with merging of the LoRA model into the base model and 4-bit quantization ☆360 · Updated last year
- Awesome LLM Benchmarks to evaluate the LLMs across text, code, image, audio, video and more ☆142 · Updated last year
- A repository of reading notes on top-conference papers relevant to LLM algorithm engineers (multimodal, PEFT, few-shot QA, RAG, LMM interpretability, Agents, CoT) ☆339 · Updated last year