RobotSe7en / GOAT
GOAT ("goat") is a Chinese-English large language model, built via supervised fine-tuning (SFT) of LLaMA.
☆12 · Updated last year
Related projects
Alternatives and complementary repositories for GOAT
- moss chat finetuning ☆50 · Updated 6 months ago
- Parameter-efficient fine-tuning of ChatGLM-6B with LoRA and P-Tuning v2 ☆54 · Updated last year
- Implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning ☆46 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆74 · Updated last year
- ☆24 · Updated last year
- NBCE: Naive Bayes-based Context Extension on ChatGLM-6B ☆14 · Updated last year
- Benchmark of KgCLUE with different models and methods ☆26 · Updated 2 years ago
- Chinese instruction datasets for fine-tuning LLMs ☆27 · Updated last year
- Make LLMs easier to use ☆58 · Updated last year
- A survey of large language model training and serving ☆34 · Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆87 · Updated 7 months ago
- chatglm_rlhf_finetuning ☆27 · Updated last year
- Zero-shot learning evaluation benchmark, Chinese edition ☆54 · Updated 3 years ago
- deep training task ☆29 · Updated last year
- ☆93 · Updated 8 months ago
- Instruction fine-tuning of the BLOOM model ☆24 · Updated last year
- Gewu (格物): multilingual and Chinese large-scale pre-trained models, lite edition. Covers pure-Chinese, knowledge-enhanced, and 113-language multilingual variants; built on the mainstream RoBERTa architecture; suited to NLU and NLG tasks; supports pytorch, tensorflow, uer, huggingface, and other frameworks ☆26 · Updated 2 years ago
- PICA: a multi-turn empathetic dialogue model ☆86 · Updated last year
- Chinese large language model evaluation, round 2 ☆70 · Updated last year
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆48 · Updated last year
- Tool for time expression extraction, parsing, and normalization ☆49 · Updated 2 years ago
- CAIL 2023 ☆39 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆106 · Updated last year
- GoGPT: Chinese-English enhanced large models trained on Llama/Llama 2 | Chinese-Llama2 ☆78 · Updated last year
- A summary of open-source large language models and low-cost methods for replicating ChatGPT ☆135 · Updated last year
- ☆43 · Updated 11 months ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆75 · Updated 2 years ago
- LoRA fine-tuning of BLOOMZ, following BELLE ☆25 · Updated last year
- ☆59 · Updated last year