wind91725 / gpt2-ml-finetune-
Fine-tune the gpt2-ml Chinese model on your own dataset
☆43 · Updated 2 years ago
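For orientation, below is a minimal sketch of what fine-tuning a Chinese GPT-2 checkpoint on your own text typically looks like. It is not the repo's own script: gpt2-ml ships TensorFlow checkpoints, so this sketch assumes a Hugging Face-compatible stand-in checkpoint (`uer/gpt2-chinese-cluecorpussmall`) and a hypothetical one-example-per-line `train.txt`.

```python
# Minimal fine-tuning sketch (NOT the repo's actual script). Assumes a
# Hugging Face-compatible Chinese GPT-2 checkpoint; gpt2-ml itself ships
# TensorFlow checkpoints, so "uer/gpt2-chinese-cluecorpussmall" is used
# here as a stand-in. "train.txt" (one example per line) is hypothetical.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizer, GPT2LMHeadModel  # Chinese GPT-2 checkpoints commonly use a BERT tokenizer

class LineDataset(Dataset):
    """One training example per line of a UTF-8 text file."""
    def __init__(self, path, tokenizer, max_len=512):
        lines = [l.strip() for l in open(path, encoding="utf-8") if l.strip()]
        self.enc = tokenizer(lines, truncation=True, max_length=max_len,
                             padding="max_length", return_tensors="pt")

    def __len__(self):
        return self.enc["input_ids"].size(0)

    def __getitem__(self, i):
        return self.enc["input_ids"][i], self.enc["attention_mask"][i]

tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()

loader = DataLoader(LineDataset("train.txt", tokenizer), batch_size=4, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for epoch in range(3):
    for input_ids, attention_mask in loader:
        input_ids = input_ids.to(device)
        attention_mask = attention_mask.to(device)
        # Mask padding positions out of the loss; the model shifts labels
        # internally for the causal LM objective.
        labels = input_ids.masked_fill(attention_mask == 0, -100)
        loss = model(input_ids=input_ids, attention_mask=attention_mask,
                     labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("finetuned-gpt2-chinese")
tokenizer.save_pretrained("finetuned-gpt2-chinese")
```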
Alternatives and similar repositories for gpt2-ml-finetune-
Users interested in gpt2-ml-finetune- are comparing it to the repositories listed below
- Chinese GPT2: pre-training and fine-tuning framework for text generation ☆187 · Updated 4 years ago
- Chinese Transformer Generative Pre-Training Model ☆59 · Updated 5 years ago
- ☆101 · Updated 4 years ago
- lasertagger-chinese: a Chinese learning example for LaserTagger, with sample data, annotated code, and shell scripts to run it ☆75 · Updated 2 years ago
- PyTorch model for https://github.com/imcaspar/gpt2-ml ☆79 · Updated 3 years ago
- Chinese generative pre-trained model ☆98 · Updated 4 years ago
- UniLM for a Chinese chitchat robot: a complimentary ("kuakua") chatbot project built on the UniLM model ☆157 · Updated 4 years ago
- Load the CPM_LM model under bert4keras ☆51 · Updated 4 years ago
- Modify Chinese text with a model adapted from LaserTagger, nicknamed "文本手术刀" ("text scalpel"). The project currently implements a text paraphrase task for data augmentation of NLP corpora ☆214 · Updated 2 years ago
- Finetune CPM-1 ☆74 · Updated 2 years ago
- ☆75 · Updated 5 years ago
- BERT fine-tuning for CMRC2018, CJRC, DRCD, CHID, and C3 ☆183 · Updated 5 years ago
- Chinese Language Generation Evaluation: a benchmark for Chinese generation tasks ☆247 · Updated 4 years ago
- Pointer-generator network based on the Transformer ☆93 · Updated 4 years ago
- A model for addressing the lack of diversity in generation tasks (translation, paraphrasing, etc.) ☆45 · Updated 5 years ago
- Load CDial-GPT with bert4keras ☆38 · Updated 4 years ago
- Simple demo implementations of QA, chat, and task-oriented dialogue ☆26 · Updated 5 years ago
- Notes on the key points of using the T5 model in Keras ☆172 · Updated 3 years ago
- CLUE baseline in PyTorch ☆74 · Updated 5 years ago
- Chinese UniLM pre-trained model ☆83 · Updated 4 years ago
- Using BERT for lic2019 machine reading comprehension ☆89 · Updated 6 years ago
- ☆89 · Updated 5 years ago
- Pre-trained Chinese XLNet model: XLNet_Large ☆230 · Updated 5 years ago
- Code for the NLPCC 2017 paper "Large-scale Simple Question Generation by Template-based Seq2seq Learning" ☆84 · Updated 7 years ago
- A distilled RoBERTa-wwm-base model, distilled from RoBERTa-wwm-large ☆65 · Updated 5 years ago
- A collection of Chinese reading comprehension datasets ☆217 · Updated 5 years ago
- Corpus of education-industry news for automatic summarization ☆200 · Updated 7 years ago
- Pre-trained Chinese ELECTRA model, trained with adversarial learning ☆140 · Updated 5 years ago
- A seq2seq model with a BERT encoder and a Transformer decoder, usable for text-generation tasks in NLP ☆71 · Updated 5 years ago
- ☆219 · Updated 5 years ago