wind91725 / gpt2-ml-finetune-
Fine-tune the gpt2-ml Chinese pre-trained model on your own dataset
☆43 · Updated last year
Alternatives and similar repositories for gpt2-ml-finetune-
Users interested in gpt2-ml-finetune- are comparing it to the repositories listed below
- Chinese GPT2: pre-training and fine-tuning framework for text generation ☆188 · Updated 3 years ago
- Chinese Transformer Generative Pre-Training Model ☆59 · Updated 5 years ago
- Load the CPM_LM model with bert4keras ☆51 · Updated 4 years ago
- Pre-trained Chinese XLNet model: Pre-Trained Chinese XLNet_Large ☆230 · Updated 5 years ago
- Pre-trained Chinese ELECTRA model, trained with adversarial learning ☆140 · Updated 5 years ago
- lasertagger-chinese: a Chinese LaserTagger learning example with sample data, annotations, and shell scripts to run it ☆75 · Updated 2 years ago
- Chinese generative pre-trained model ☆98 · Updated 4 years ago
- Chinese Language Generation Evaluation: a benchmark for Chinese text generation tasks ☆247 · Updated 4 years ago
- ☆75 · Updated 5 years ago
- PyTorch model for https://github.com/imcaspar/gpt2-ml ☆79 · Updated 3 years ago
- A model for addressing the lack of diversity in generation tasks such as translation and paraphrasing ☆45 · Updated 5 years ago
- A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019) ☆127 · Updated 2 years ago
- Chinese pre-trained UniLM model ☆83 · Updated 4 years ago
- Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension ☆166 · Updated 3 years ago
- Byte Cup 2018 International Machine Learning Contest (3rd prize) ☆77 · Updated 2 years ago
- BERT fine-tuning for CMRC2018, CJRC, DRCD, CHID, and C3 ☆183 · Updated 4 years ago
- Learning character embeddings from Chinese pre-trained models; evaluating the Chinese performance of BERT and ELMo ☆99 · Updated 5 years ago
- DistilBERT for Chinese: a distilled BERT model pre-trained on large-scale Chinese corpora ☆92 · Updated 5 years ago
- TensorFlow code and pre-trained models for BERT and ERNIE ☆145 · Updated 5 years ago
- Modify Chinese text with a model built on LaserTagger, dubbed "文本手术刀" (text scalpel); the project currently implements a text paraphrasing task for data augmentation of NLP corpora ☆213 · Updated 2 years ago
- Machine reading comprehension for LIC2019 using BERT ☆89 · Updated 5 years ago
- ☆101 · Updated 4 years ago
- Fine-tune CPM-1 ☆74 · Updated 2 years ago
- Source code and model for 5th place on leaderboard B of the 2019 Language and Intelligence Technology Competition, knowledge-driven dialogue track ☆25 · Updated 5 years ago
- Transformer-based pointer-generator network ☆92 · Updated 4 years ago
- Simple demo implementations of QA, chat, and task-oriented dialogue ☆26 · Updated 5 years ago
- Collections of Chinese reading comprehension datasets ☆217 · Updated 5 years ago
- Rank-2 solution (no BERT) for the 2019 Language and Intelligence Challenge - DuReader 2.0 Machine Reading Comprehension ☆127 · Updated 5 years ago
- A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large ☆65 · Updated 5 years ago
- Notes on the key points of using the T5 model in Keras ☆172 · Updated 3 years ago