WangRongsheng / MedQA-ChatGLM
🛰️ LoRA, P-Tuning V2, Freeze, RLHF, and other fine-tuning methods applied to ChatGLM on real medical dialogue data; our focus goes beyond medical question answering.
☆336 Updated 2 years ago
Alternatives and similar repositories for MedQA-ChatGLM
Users that are interested in MedQA-ChatGLM are comparing it to the libraries listed below
- A Chinese medical ChatGPT based on LLaMa, trained on a large-scale pre-training corpus and a multi-turn dialogue dataset. ☆384 Updated last year
- DeepSpeed, LLM, Medical_Dialogue, medical large language models, pre-training, fine-tuning. ☆288 Updated last year
- The largest-scale Chinese medical QA dataset, with 26,000,000 question-answer pairs. ☆301 Updated last year
- A Chinese medical consultation model based on ChatGLM-6B. ☆829 Updated 2 years ago
- PromptCBLUE: a large-scale instruction-tuning dataset for multi-task and few-shot learning in the Chinese medical domain. ☆382 Updated last year
- llm-medical-data: medical datasets for fine-tuning large language models. ☆133 Updated 2 years ago
- ChatMed: a Chinese medical large language model, good at answering patients'/users' everyday medical questions online. ☆606 Updated 2 years ago
- ☆109 Updated last year
- Repository of DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful me… ☆556 Updated 2 years ago
- Repo for Chinese Medical ChatGLM: ChatGLM instruction fine-tuning based on Chinese medical knowledge. ☆1,027 Updated 2 years ago
- Fine-tuning ChatGLM. ☆128 Updated 2 years ago
- WiNGPT is a GPT-based large language model for the medical vertical that aims to integrate professional medical knowledge, healthcare information, and data, providing the healthcare industry with intelligent medical Q&A, diagnostic support, and medical knowledge services to improve the efficiency of diagnosis and treatment and the quality of medical care. ☆415 Updated last year
- Repo for ShenNong-TCM-LLM (the "ShenNong" 神农 model, the first Chinese large language model for Traditional Chinese Medicine). ☆435 Updated 2 years ago
- Chinese LLM fine-tuning (LLM-SFT); math instruction dataset MWP-Instruct; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (… ☆213 Updated last year
- CMB: A Comprehensive Medical Benchmark in Chinese. ☆216 Updated 8 months ago
- ChatGLM2-6B full-parameter fine-tuning, with efficient fine-tuning support for multi-turn dialogue. ☆401 Updated 2 years ago
- ChatGLM multi-GPU training with DeepSpeed and… ☆411 Updated last year
- Using the peft library to perform efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B, merge the LoRA model into the base model, and quantize the result to 4 bits (see the sketch after this list). ☆356 Updated 2 years ago
- Firefly Chinese LLaMA-2 large model, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models
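
Several of the repositories above center on parameter-efficient fine-tuning of ChatGLM. For orientation, below is a minimal sketch of a 4-bit QLoRA setup using the Hugging Face transformers and peft libraries, roughly in the spirit of the QLoRA entry above; the model id, LoRA hyperparameters, and target module name are illustrative assumptions, not the exact configuration of any listed project.

```python
# Minimal sketch: 4-bit QLoRA fine-tuning setup for a ChatGLM-style model with peft.
# Model id, target modules, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "THUDM/chatglm-6b"  # assumed Hugging Face model id

# Load the base model in 4-bit (NF4) precision via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name, quantization_config=bnb_config, trust_remote_code=True
)

# Prepare the quantized model for training and attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# After training, the LoRA weights can be folded back into the base model, e.g.:
#   from peft import PeftModel
#   merged = PeftModel.from_pretrained(base_model, "path/to/lora").merge_and_unload()
```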