mymusise / gpt2-quickly
☆141 · Updated 2 years ago
Alternatives and similar repositories for gpt2-quickly
Users interested in gpt2-quickly are comparing it to the libraries listed below
- ☆219 · Updated 2 years ago
- PyTorch model for https://github.com/imcaspar/gpt2-ml ☆78 · Updated 3 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆172 · Updated last year
- Easy-to-use CPM for Chinese text generation ☆535 · Updated 2 years ago
- clueai toolkit: 3 lines of code and 3 minutes to customize the APIs you need! ☆231 · Updated 2 years ago
- GPT2 training script for Chinese in TensorFlow 2.0 ☆153 · Updated 3 years ago
- ☆247 · Updated 2 years ago
- dialogbot provides search-based, task-based, and generative dialogue models. A chatbot built on QA-style, task-oriented, and chit-chat dialogue models, with support for web-search QA and domain knowledge… ☆333 · Updated last year
- PERT: Pre-training BERT with Permuted Language Model ☆365 · Updated 2 months ago
- ChatGLM-6B fine-tuning. ☆136 · Updated 2 years ago
- Chinese AI writing (poems or couplets) ☆123 · Updated last year
- Code for CPM-2 Pre-Train ☆158 · Updated 2 years ago
- ChatGLM-6B-Slim: ChatGLM-6B with the 20K image tokens trimmed away; identical performance with a smaller GPU memory footprint. ☆127 · Updated 2 years ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal addition, subtraction, multiplication, and division; runs on GPU or CPU ☆164 · Updated 2 years ago
- Exploring how Chinese instruct data performs for fine-tuning ChatGLM and LLaMA ☆389 · Updated 2 years ago
- Humanable Chat Generative-model Fine-tuning | LLM fine-tuning ☆207 · Updated last year
- ☆102 · Updated 4 years ago
- LERT: A Linguistically-motivated Pre-trained Language Model ☆219 · Updated 2 months ago
- Simple implementation using LoRA from the peft library to fine-tune ChatGLM-6B ☆84 · Updated 2 years ago
- A framework for cleaning Chinese dialog data ☆273 · Updated 4 years ago
- A sentence-embedding approach more effective than Sentence-BERT ☆375 · Updated 2 years ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆500 · Updated 2 years ago
- GPT2-based Chinese summarization model ☆408 · Updated 2 years ago
- A small Chinese chat model: t5-base trained with supervision on a large amount of data. ☆101 · Updated last year
- PromptCLUE: a model supporting zero-shot learning across all-Chinese tasks ☆664 · Updated 2 years ago
- Provides a Chinese generative (abstractive) summarization service ☆347 · Updated 2 weeks ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆90 · Updated 2 years ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆76 · Updated 2 years ago
- EVA: Large-scale Pre-trained Chit-Chat Models ☆307 · Updated 2 years ago
- Chinese text error correction ☆93 · Updated 3 years ago