RobotSe7en / GOAT
GOAT ("mountain goat") is a Chinese-English large language model, built by SFT on LLaMA.
☆12 · Updated 2 years ago
Alternatives and similar repositories for GOAT
Users interested in GOAT are comparing it to the repositories listed below.
- MOSS chat fine-tuning ☆51 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated 2 years ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆76 · Updated 3 years ago
- Summarizes open-source large language models and low-cost replication methods for ChatGPT ☆137 · Updated 2 years ago
- Parameter-efficient fine-tuning of ChatGLM-6B with LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆110 · Updated 2 years ago
- The corpus & code for the EMNLP 2022 paper "FCGEC: Fine-Grained Corpus for Chinese Grammatical Error Correction" | FCGEC Chinese grammatical error correction corpus and STG model ☆120 · Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆89 · Updated 2 years ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆49 · Updated 2 years ago
- Make LLMs easier to use ☆59 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- NBCE: Naive Bayes-based Context Extension on ChatGLM-6B ☆15 · Updated 2 years ago
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆89 · Updated last year
- Use ChatGLM to perform text embedding ☆45 · Updated 2 years ago
- Hands-on information extraction with LLaMA ☆102 · Updated 2 years ago
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆51 · Updated 2 years ago
- LoRA fine-tuning of BLOOMZ, following BELLE ☆25 · Updated 2 years ago
- Deep learning ☆149 · Updated 7 months ago
- ☆60 · Updated 3 years ago
- A tool for time expression extraction, parsing, and normalization ☆55 · Updated 3 years ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆165 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Benchmark of KgCLUE, with different models and methods ☆28 · Updated 4 years ago
- An open-source and powerful information extraction toolkit based on GPT (GPT for Information Extraction; GPT4IE for short). Note: we set a… ☆176 · Updated 2 years ago
- GLM (General Language Model) ☆24 · Updated 3 years ago
- ChatGLM-6B fine-tuning ☆137 · Updated 2 years ago
- 格物: multilingual and Chinese large-scale pre-trained model (lite version), covering pure Chinese, knowledge-enhanced, and 113-language multilingual variants; uses the mainstream RoBERTa architecture, suitable for NLU and NLG tasks; supports the pytorch, tensorflow, uer, and huggingface frameworks ☆30 · Updated 3 years ago
- Finetune CPM-2 ☆81 · Updated 2 years ago
- Chinese instruction dataset for fine-tuning LLMs ☆28 · Updated 2 years ago