Large-scale Pre-training Corpus for Chinese: a 100 GB Chinese pre-training corpus
☆997 · Updated Feb 6, 2026
Alternatives and similar repositories for CLUECorpus2020
Users interested in CLUECorpus2020 are comparing it to the repositories listed below.
- A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models ☆816 · Updated Jul 8, 2020
- Large Scale Chinese Corpus for NLP ☆9,862 · Updated Feb 6, 2026
- Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard ☆4,232 · Updated Feb 6, 2026
- Open-source pre-training model framework in PyTorch and pre-trained model zoo ☆3,106 · Updated May 9, 2024
- A large-scale Chinese short-text conversation dataset and Chinese pre-training dialog models ☆1,936 · Updated Jun 12, 2023
- Chinese NLP datasets, collected as material for everyday experiments; contributions and pull requests are welcome ☆4,575 · Updated Nov 21, 2023
- Search across all Chinese NLP datasets, with commonly used English NLP datasets included ☆4,419 · Updated Nov 21, 2022
- Language understanding evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus, and leaderboard ☆1,787 · Updated Feb 18, 2023
- MNBVC (Massive Never-ending BT Vast Chinese corpus): an ultra-large-scale Chinese corpus, benchmarked against the 40 TB of data used to train ChatGPT. It covers mainstream culture as well as niche subcultures and even "Martian" internet slang, and includes news, essays, novels, books, magazines… ☆4,128 · Updated Jan 31, 2026
- [COLING 2022] CSL: A Large-scale Chinese Scientific Literature Dataset ☆659 · Updated Jun 19, 2023
- A Lite BERT for Self-Supervised Learning of Language Representations; large-scale Chinese pre-trained ALBERT models ☆3,984 · Updated Nov 21, 2022
- Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series) ☆10,175 · Updated Jul 15, 2025
- Pretrained language model and related optimization techniques developed by Huawei Noah's Ark Lab ☆3,156 · Updated Jan 22, 2024
- Chinese version of the GPT-2 training code, using the BERT tokenizer ☆7,599 · Updated Apr 25, 2024
- Open language pre-trained model zoo ☆1,005 · Updated Nov 18, 2021
- Pre-trained Chinese ELECTRA models ☆1,440 · Updated Jul 15, 2025
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM) ☆8,281 · Updated Oct 16, 2024
- RoBERTa for Chinese: Chinese pre-trained RoBERTa models ☆2,773 · Updated Jul 22, 2024
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆506 · Updated Oct 4, 2022
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,696 · Updated May 8, 2023
- Open Chinese chat corpus ☆4,176 · Updated Apr 23, 2024
- Chinese-LLaMA 1 & 2 and Chinese-Falcon base models; the ChatFlow Chinese dialogue model; a Chinese OpenLLaMA model; NLP pre-training and instruction-tuning datasets ☆3,055 · Updated Apr 14, 2024
- 100+ Chinese Word Vectors: over one hundred pre-trained Chinese word embeddings ☆12,183 · Updated Oct 30, 2023
- MD5 links for a Chinese book corpus ☆217 · Updated Jan 31, 2024
- Collecting, organizing, and publishing Chinese NLP corpora and datasets, to advance Chinese NLP together with like-minded contributors ☆6,477 · Updated Jan 29, 2019
- Fengshenbang-LM: an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center at IDEA, serving as infrastructure for Chinese AIGC and cognitive intelligence ☆4,148 · Updated Aug 13, 2024
- A BERT for retrieval and generation ☆860 · Updated Feb 26, 2021
- Datasets and SOTA results for every field of Chinese NLP ☆1,812 · Updated Apr 7, 2022
- Awesome Pretrained Chinese NLP Models: a collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models ☆5,528 · Updated Feb 16, 2026
- Keras implementation of Transformers for humans ☆5,421 · Updated Nov 11, 2024
- Chinese Language Generation Evaluation: a benchmark for Chinese generation tasks ☆249 · Updated Dec 9, 2020
- ☆441 · Updated Apr 25, 2025
- GPT2 for Multiple Languages, including pretrained models; a 1.5B-parameter Chinese pre-trained model ☆1,704 · Updated May 22, 2023
- GPT-2 for Chinese chitchat (implements the MMI idea from DialoGPT) ☆3,012 · Updated Oct 30, 2023
- A large-scale Chinese cross-domain task-oriented dialogue dataset ☆711 · Updated Jun 17, 2024
- One-command Chinese data augmentation package: NLP data augmentation, BERT-based augmentation, EDA; `pip install nlpcda` ☆1,879 · Updated Mar 18, 2025
- Pre-trained Chinese XLNet models ☆1,650 · Updated Jul 15, 2025
- pycorrector, a toolkit for text error correction; applies Kenlm, T5, MacBERT, ChatGLM3, Qwen2.5, and other models to correction scenarios, ready to use out of the box ☆6,374 · Updated Jan 12, 2026
- Chinese Pre-Trained Language Models (CPM-LM) Version-I ☆1,582 · Updated Mar 18, 2023