clue-ai / PromptCLUE
PromptCLUE: a zero-shot learning model supporting all-Chinese NLP tasks
☆663 · Updated 2 years ago
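Since this page only catalogs the project, the snippet below is a minimal sketch of how a zero-shot prompt might be sent to PromptCLUE through the Hugging Face transformers API. The checkpoint id `ClueAI/PromptCLUE-base` and the exact prompt wording are assumptions based on the project description, not details taken from this page; check the repository for the officially published checkpoint and prompt templates.

```python
# Minimal zero-shot prompting sketch for PromptCLUE (a T5-style model).
# Assumption: the checkpoint is published as "ClueAI/PromptCLUE-base";
# requires `transformers` and `sentencepiece` to be installed.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "ClueAI/PromptCLUE-base"  # assumed checkpoint id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Tasks are phrased as natural-language prompts, e.g. sentiment classification.
prompt = "情感分析:\n这个产品质量很好,我非常满意。\n选项:积极,消极\n答案:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```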
Alternatives and similar repositories for PromptCLUE
Users interested in PromptCLUE are comparing it to the libraries listed below.
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆495 · Updated 2 years ago
- Mengzi Pretrained Models ☆535 · Updated 2 years ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 · Updated 2 years ago
- TextGen: implementations of text-generation models, including LLaMA, ChatGLM, BLOOM, GPT2, BART, T5, SongNet, and more ☆964 · Updated 9 months ago
- [COLING 2022] CSL: A Large-scale Chinese Scientific Literature Dataset ☆635 · Updated 2 years ago
- Large-scale pre-training corpus for Chinese (100 GB) ☆968 · Updated 2 years ago
- An upgraded SimBERT (SimBERTv2)! ☆444 · Updated 3 years ago
- Exploring how Chinese instruct data performs when fine-tuning ChatGLM and LLaMA ☆390 · Updated 2 years ago
- Collections of resources from the Joint Laboratory of HIT and iFLYTEK Research (HFL) ☆370 · Updated 2 years ago
- MiniRBT (a series of small Chinese pre-trained models) ☆281 · Updated 2 years ago
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year
- Chinese instruction fine-tuning dataset for Alpaca ☆392 · Updated 2 years ago
- A BERT for retrieval and generation ☆860 · Updated 4 years ago
- Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT) ☆672 · Updated 2 years ago
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning for multi-turn dialogue ☆399 · Updated last year
- Implementation of Chinese ChatGPT ☆286 · Updated last year
- A semantic understanding and matching dataset with 3,000,000+ examples, usable for unsupervised contrastive learning, semi-supervised learning, etc., to build the best-performing Chinese pre-trained models ☆298 · Updated 2 years ago
- clueai toolkit: customize the API you need in 3 lines of code and 3 minutes! ☆233 · Updated 2 years ago
- CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation ☆489 · Updated 2 years ago
- A framework for cleaning Chinese dialog data ☆271 · Updated 4 years ago
- A large-scale Chinese natural language inference and semantic similarity calculation dataset ☆430 · Updated 5 years ago
- Chinese natural language inference and semantic similarity datasets ☆357 · Updated 3 years ago
- Unified embedding model ☆863 · Updated last year
- A manually curated Chinese dialogue dataset and fine-tuning code for ChatGLM ☆1,179 · Updated last month
- A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models ☆818 · Updated 4 years ago
- ☆308 · Updated 2 years ago
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆267 · Updated last year
- A Chinese generative pre-trained model ☆567 · Updated 3 years ago
- PERT: Pre-training BERT with Permuted Language Model ☆361 · Updated 2 years ago
- A sentence-embedding approach more effective than Sentence-BERT ☆373 · Updated 2 years ago