CLUEbenchmark / ELECTRA
Chinese pre-trained ELECTRA model: pretraining a Chinese model with ELECTRA's adversarial (generator–discriminator) objective
☆140 · Updated 5 years ago
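For orientation, here is a minimal sketch of querying a Chinese ELECTRA discriminator through Hugging Face `transformers`. This is not code from this repository (which distributes TensorFlow checkpoints); the checkpoint identifier is illustrative and would need to be swapped for a converted or hosted checkpoint you actually have.

```python
# Hypothetical usage sketch: scoring tokens with a Chinese ELECTRA discriminator.
# The model id below is illustrative, not provided by CLUEbenchmark/ELECTRA itself.
import torch
from transformers import ElectraTokenizer, ElectraForPreTraining

model_name = "hfl/chinese-electra-base-discriminator"  # assumed/illustrative checkpoint
tokenizer = ElectraTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-token "replaced vs. original" scores

print(torch.sigmoid(logits))  # probabilities that each token was replaced
```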
Alternatives and similar repositories for ELECTRA:
Users interested in ELECTRA are comparing it to the repositories listed below
- CLUE baseline pytorch: PyTorch version of the CLUE baselines ☆74 · Updated 4 years ago
- BERT fine-tuning for CMRC2018, CJRC, DRCD, CHID, C3 ☆182 · Updated 4 years ago
- DIAC2019 question-equivalence competition based on adversarial attacks ☆81 · Updated 5 years ago
- A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large ☆65 · Updated 4 years ago
- DistilBERT for Chinese: large-scale Chinese pre-trained distilled BERT model ☆91 · Updated 5 years ago
- Convert Baidu ERNIE's PaddlePaddle model to a TensorFlow model ☆177 · Updated 5 years ago
- ☆89 · Updated 4 years ago
- TensorFlow version of BERT-of-Theseus ☆62 · Updated 4 years ago
- Unsupervised word segmentation and syntactic parsing based on BERT ☆110 · Updated 4 years ago
- Adversarial-attack text matching competition ☆42 · Updated 5 years ago
- 2019 Language and Intelligence Challenge, knowledge-driven dialogue: 5th-place source code and model on leaderboard B ☆25 · Updated 5 years ago
- Using BERT for LIC2019 machine reading comprehension ☆89 · Updated 5 years ago
- 2019 Language and Intelligence Challenge: knowledge-graph-based proactive conversation ☆115 · Updated 5 years ago
- Final Project for EECS496-7 ☆62 · Updated 6 years ago
- Chinese sequence labeling based on BERT ☆141 · Updated 6 years ago
- Byte Cup 2018 International Machine Learning Contest (3rd prize) ☆77 · Updated 2 years ago
- Convert https://github.com/brightmart/albert_zh to the Google format ☆62 · Updated 4 years ago
- First-place solution of WSDM Cup 2020: pairwise BERT, LightGBM ☆89 · Updated 5 years ago
- Chinese UniLM pre-trained model ☆83 · Updated 4 years ago
- ☆123 · Updated 6 years ago
- Using ELMo in a Chinese-language setting ☆104 · Updated 6 years ago
- ☆59 · Updated 5 years ago
- Chinese pre-trained XLNet model: Pre-Trained Chinese XLNet_Large ☆229 · Updated 5 years ago
- Subword Encoding in Lattice LSTM for Chinese Word Segmentation ☆53 · Updated 5 years ago
- A baseline for Baidu's 2019 entity linking competition (CCKS2019) ☆113 · Updated 5 years ago
- Neural word segmentation with rich pretraining, code for ACL 2017 paper ☆165 · Updated 6 years ago
- ☆278 · Updated 4 years ago
- Python toolkit for the Chinese Language Understanding Evaluation (CLUE) benchmark ☆129 · Updated last year
- Chinese translation of the paper "XLNet: Generalized Autoregressive Pretraining for Language Understanding" ☆49 · Updated 5 years ago
- Rank-2 solution (no BERT) for the 2019 Language and Intelligence Challenge, DuReader 2.0 machine reading comprehension ☆127 · Updated 5 years ago