HIT-SCIR / huozi
Huozi: a general-purpose large language model
☆382 · Updated 5 months ago
Alternatives and similar repositories for huozi:
Users who are interested in huozi are comparing it to the libraries listed below.
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆406 · Updated last year
- A curated collection of open-source SFT datasets, supplemented on an ongoing basis ☆491 · Updated last year
- A Chinese medical ChatGPT based on LLaMa, trained on a large-scale pre-training corpus and a multi-turn dialogue dataset. ☆340 · Updated last year
- ☆197 · Updated last year
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆192 · Updated 4 months ago
- PromptCBLUE: a large-scale instruction-tuning dataset for multi-task and few-shot learning in the medical domain in Chinese ☆341 · Updated last year
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆212 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year
- A tool for manually annotating and ranking response data during the RLHF stage of large-model training ☆247 · Updated last year
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B, then merges the LoRA model into the base model and quantizes it to 4 bits (see the sketch after this list) ☆357 · Updated last year
- ☆299 · Updated 8 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆656 · Updated last year
- A HuggingFace-based tool for training and testing large language models. Supports a web UI and terminal inference for each model, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization. ☆211 · Updated last year
- YAYI information extraction large model: instruction-tuned on millions of manually constructed, high-quality information extraction samples, developed by the Zhongke Wenge algorithm team. (Repo for YAYI Unified Information Extraction Model) ☆289 · Updated 6 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆609 · Updated last month
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning support for multi-turn dialogue. ☆399 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆363 · Updated 6 months ago
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆648 · Updated 6 months ago
- The largest-scale Chinese medical QA dataset, with 26,000,000 question-answer pairs. ☆243 · Updated 11 months ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with support for langchain integration to load a local knowledge base for retrieval-augmented generation (RAG). ☆533 · Updated 7 months ago
- A repository for experimenting with and reproducing the pre-training process of LLMs. ☆401 · Updated 10 months ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆483 · Updated 2 years ago
- Chinese large-model fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (… ☆186 · Updated 9 months ago
- Deepspeed, LLM, Medical_Dialogue, medical large models, pre-training, fine-tuning ☆256 · Updated 8 months ago
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning. ☆985 · Updated 10 months ago
- 🛰️ LoRA, P-Tuning V2, Freeze, RLHF, and other fine-tuning methods applied to ChatGLM on real medical dialogue data; our ambitions go beyond medical QA ☆315 · Updated last year
- Multi-GPU chatglm with deepspeed and … ☆405 · Updated 7 months ago
- Fuzi Mingcha judicial large model, jointly developed by Shandong University, Inspur Cloud, and China University of Political Science and Law. Built on ChatGLM and trained on massive unsupervised Chinese judicial corpora plus supervised judicial fine-tuning data, it supports statute retrieval, case analysis, syllogistic reasoning for judgments, and judicial dialogue, aiming to provide users with comprehensive, highly accurate legal consultation and answers… ☆309 · Updated 4 months ago
- This project collects open-source datasets for table intelligence tasks (e.g., table QA and table-to-text generation), converts the raw data into instruction fine-tuning format and fine-tunes LLMs on it, thereby strengthening LLMs' understanding of tabular data and ultimately building a large language model dedicated to table intelligence tasks. ☆535 · Updated 10 months ago
- An open-source educational chat model from ICALK, East China Normal University. An open-source Chinese-English educational dialogue large model. (General base model, GPU deployment, data cleaning) Tribute to: LLaMA, MOSS, BELLE, Z… ☆756 · Updated 4 months ago
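
The entry above that mentions QLoRA with the peft library describes a common recipe: load the base model in 4-bit, attach LoRA adapters, train them, then merge the adapters back into the base model. The sketch below illustrates that general recipe only; it is not code from that repository, and the model ID, hyperparameters, and target module name are assumptions based on typical ChatGLM2 setups (it assumes the transformers, peft, and bitsandbytes packages are installed).

```python
# Minimal QLoRA sketch (illustrative only, not the referenced repository's code).
# Assumes: pip install transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "THUDM/chatglm2-6b"  # assumed model ID

# Load the frozen base model in 4-bit NF4; only the LoRA adapters will be trained.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name, quantization_config=bnb_config, trust_remote_code=True
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters. "query_key_value" is an assumption for ChatGLM's fused
# attention projection; other architectures use different module names.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ... train with transformers.Trainer or a custom training loop ...

# Merging is typically done against a full-precision copy of the base model:
#   PeftModel.from_pretrained(base_fp16_model, adapter_dir).merge_and_unload()
```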