hikariming / chat-dataset-baseline
A manually curated Chinese dialogue dataset for fine-tuning, plus a ChatGLM fine-tuning script
☆1,179 · Updated last month
Alternatives and similar repositories for chat-dataset-baseline
Users interested in chat-dataset-baseline are comparing it to the repositories listed below.
- ChatGLM-6B fine-tuning and Alpaca fine-tuning ☆1,544 · Updated 3 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 · Updated 2 years ago
- Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA) ☆720 · Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆1,005 · Updated last year
- A fine-tuning dataset generation tool designed for ChatGLM; quickly build your own catgirl persona. ☆606 · Updated last year
- Repo for adapting Meta LLaMA2 to Chinese! A Chinese adaptation of Meta's newly released LLaMA2 (fully open source and commercially usable). ☆742 · Updated last year
- TextGen: Implementation of text generation models, including LLaMA, BLOOM, GPT2, BART, T5, SongNet, and so on (text generation models covering LLaMA, ChatGLM, BLO…) ☆964 · Updated 9 months ago
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pre-training / instruction fine-tuning datasets ☆3,054 · Updated last year
- 骆驼 (Luotuo): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子昂 @ 商汤科技 ☆721 · Updated 2 years ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B, with merging of the LoRA model into the base model and 4-bit quantization; a minimal sketch of this pattern appears after this list. ☆360 · Updated last year
- Repo for Chinese Medical ChatGLM: ChatGLM instruction fine-tuning based on Chinese medical knowledge ☆1,007 · Updated 2 years ago
- Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more ☆2,749 · Updated last year
- A fine-tuning recipe based on ChatGLM-6B + LoRA ☆3,766 · Updated last year
- Alpaca Chinese instruction fine-tuning dataset ☆392 · Updated 2 years ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆605 · Updated 5 months ago
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆390 · Updated 2 years ago
- Multi-GPU ChatGLM using DeepSpeed and … ☆409 · Updated 11 months ago
- Unified embedding model ☆864 · Updated last year
- ChatGLM2-6B full-parameter fine-tuning, with efficient fine-tuning that supports multi-turn dialogue. ☆399 · Updated last year
- A Chinese medical consultation model based on ChatGLM-6B ☆815 · Updated last year
- 聚宝盆 (Cornucopia): a series of open-source, commercially usable Chinese financial LLMs, with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.) ☆636 · Updated last year
- A curated collection of open-source SFT datasets, supplemented on an ongoing basis ☆524 · Updated 2 years ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,751 · Updated last year
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,741 · Updated last year
- Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT ☆3,709 · Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ samples ☆495 · Updated 2 years ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,073 · Updated 10 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆766 · Updated 6 months ago
- TigerBot: A multi-language multi-task LLM ☆2,254 · Updated 5 months ago
- Chinese legal LLaMA (LLaMA for the Chinese legal domain) ☆949 · Updated 9 months ago
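
Many of the repositories above follow the same LoRA/QLoRA recipe: load a ChatGLM checkpoint, attach low-rank adapters with the peft library, train only the adapters, and optionally merge them back into the base weights. The snippet below is a minimal sketch of that pattern, not the code of any specific repo listed here; the checkpoint name `THUDM/chatglm2-6b`, the `query_key_value` target module, and the hyperparameters are assumptions, and a true 4-bit QLoRA run would additionally load the base model with a 4-bit quantization config.

```python
# Hedged sketch: LoRA adapters on a ChatGLM-style checkpoint via peft.
# Checkpoint name, target_modules, and hyperparameters are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm2-6b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype=torch.float16
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # assumed attention projection name
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only adapter weights are trainable

# After training, the adapters can be folded back into the base weights:
# merged_model = model.merge_and_unload()
```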