michael-wzhu / PromptCBLUE
PromptCBLUE: a large-scale instruction-tuning dataset for multi-task and few-shot learning in the medical domain in Chinese
☆323 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for PromptCBLUE
- Deepspeed, LLM, Medical_Dialogue, medical LLMs, pretraining, fine-tuning ☆228 · Updated 5 months ago
- A Chinese medical ChatGPT based on LLaMA, trained on a large-scale pretraining corpus and a multi-turn dialogue dataset. ☆311 · Updated 10 months ago
- CMB: A Comprehensive Medical Benchmark in Chinese ☆134 · Updated 7 months ago
- 🛰️ LoRA, P-Tuning V2, Freeze, and RLHF fine-tuning of ChatGLM on real-world medical dialogue data; our scope goes beyond medical QA. ☆297 · Updated last year
- The largest-scale Chinese medical QA dataset, with 26,000,000 question-answer pairs. ☆221 · Updated 7 months ago
- A tool for manually ranking annotated response data in the RLHF stage. ☆241 · Updated last year
- A curated collection of open-source SFT datasets, continuously updated. ☆440 · Updated last year
- NLP papers and resources in the medical field from major conferences. ☆103 · Updated last year
- This is the repo of the medical dialogue dataset 'imcs21' in CBLUE@Tianchi. ☆80 · Updated last year
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning support for multi-turn dialogue. ☆395 · Updated last year
- Biomedical LLM: a bilingual (Chinese and English) fine-tuned large language model for diverse biomedical tasks. ☆140 · Updated 3 weeks ago
- RAGOnMedicalKG: combines LLM-based RAG with a knowledge graph for demo-level question answering, intended to illustrate the basic approach. ☆195 · Updated 7 months ago
- This is the updated version of the dataset for Chinese community medical question answering. ☆309 · Updated 5 years ago
- Analysis of the Chinese cognitive abilities of language models. ☆235 · Updated last year
- llm-medical-data: medical datasets for fine-tuning large language models. ☆68 · Updated last year
- A line-by-line annotated version of the Baichuan2 code, suitable for beginners. ☆209 · Updated last year
- An LLM training and evaluation tool built on HuggingFace. Supports web UI and terminal inference for various models, low-parameter and full-parameter training (pretraining, SFT, RM, PPO, DPO), model merging, and quantization. ☆202 · Updated 11 months ago
- Focused on Chinese-domain large language models: adapting an LLM to a specific industry or field to build an industry-level, company-level, or domain-level model. ☆111 · Updated last month
- Fine-tuning ChatGLM. ☆123 · Updated last year
- Repository of DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful me… ☆486 · Updated last year
- YAYI information extraction LLM: instruction-tuned on over a million manually constructed, high-quality information extraction samples, developed by the Zhongke Wenge algorithm team. (Repo for YAYI Unified Information Extraction Model) ☆269 · Updated 3 months ago
- Chinese LLM fine-tuning (LLM-SFT) with the MWP-Instruct math instruction dataset. Supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supported methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (fine… ☆166 · Updated 5 months ago
- Efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B using the peft library, with merging of the LoRA model into the base model and 4-bit quantization. ☆354 · Updated last year
- Hands-on information extraction with LLaMA. ☆97 · Updated last year
- Universal information extraction with instruction learning ☆371 · Updated 10 months ago
- This project collects open-source datasets for table-intelligence tasks (e.g., table QA and table-to-text generation), converts the raw data into instruction-tuning format, and fine-tunes LLMs on it to improve their understanding of tabular data, ultimately building a large language model dedicated to table-intelligence tasks. ☆463 · Updated 6 months ago
- Modify ChatGLM output with only RLHF: directly raising or lowering the probability of target outputs. ☆189 · Updated last year
- ☆81 · Updated 7 months ago
- ☆157 · Updated last year