Jiayan (甲言), the first NLP toolkit designed for Classical Chinese (Literary Chinese), supports lexicon construction, tokenization, POS tagging, sentence segmentation, and punctuation restoration.
☆653 · Updated Nov 2, 2021
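To make the toolkit's "lexicon construction → tokenization" pipeline concrete, here is a toy sketch of dictionary-based forward maximum matching, a classic baseline for segmenting Chinese text. This is not Jiayan's actual API (Jiayan builds its lexicon statistically and tokenizes with an HMM); the lexicon and sentence below are made up for illustration.

```python
def tokenize(text, lexicon, max_len=4):
    """Greedy forward maximum matching against a word lexicon.

    At each position, take the longest lexicon entry that matches;
    fall back to a single character when nothing matches.
    """
    tokens = []
    i = 0
    while i < len(text):
        for n in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i:i + n]
            if n == 1 or cand in lexicon:
                tokens.append(cand)
                i += n
                break
    return tokens

lexicon = {"天下", "圣人", "不仁"}  # tiny illustrative lexicon
print(tokenize("天下圣人不仁也", lexicon))
# ['天下', '圣人', '不仁', '也']
```

Real toolkits replace the hand-built dictionary with a lexicon mined from raw corpora and a statistical model that resolves ambiguous matches.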
Alternatives and similar repositories for Jiayan
Users interested in Jiayan are comparing it to the repositories listed below.
- GuwenBERT: A Pre-trained Language Model for Classical Chinese (Literary Chinese) ☆554 · Updated Aug 31, 2021
- GuwenModels: A collection of Classical Chinese natural language processing models and resources gathered from across the internet ☆194 · Updated Dec 11, 2023
- A comprehensive parallel corpus of Classical Chinese and Modern Chinese ☆1,413 · Updated Apr 21, 2024
- Classical Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard ☆56 · Updated Aug 23, 2023
- Tokenizer, POS-tagger, and dependency-parser for Classical Chinese ☆69 · Updated Feb 23, 2026
- Ancient Chinese Corpus with Word Sense Annotation ☆62 · Updated May 29, 2024
- Tokenizer, POS-tagger, and dependency-parser for Classical Chinese ☆19 · Updated Feb 28, 2026
- SikuBERT: a pre-trained language model for the Siku Quanshu (Complete Library in Four Sections) ☆153 · Updated Jul 30, 2023
- A parallel corpus for Classical-to-Modern Chinese translation ☆114 · Updated Jan 12, 2022
- An open-source classical Chinese information processing toolkit developed by the Tsinghua Natural Language Processing Group ☆52 · Updated Dec 13, 2018
- AnchiBERT: A Pre-Trained Model for Ancient Chinese Language Understanding and Generation ☆71 · Updated Jul 16, 2021
- A repository of classical Chinese texts ☆323 · Updated Feb 3, 2018
- BERT-CCPoem, a BERT-based pre-trained model specifically for classical Chinese poetry ☆161 · Updated Mar 1, 2022
- Daizhige (殆知阁) collection of ancient Chinese documents ☆1,475 · Updated May 13, 2024
- ☆40 · Updated Feb 20, 2023
- CCL 2023, on the construction and application of a corpus of Classical Chinese phonetic loan characters (通假字): the loan-character resource base ☆21 · Updated Sep 23, 2023
- Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) ☆35 · Updated this week
- A Python toolkit for word segmentation of Traditional Chinese classical texts ☆36 · Updated Jan 3, 2022
- Evaluation of Natural Language Processing (NLP) tools for the Ancient Chinese language ☆44 · Updated Feb 26, 2026
- Early Modern Chinese corpus datasets (NLP, corpora, Classical Chinese, digital humanities, computational linguistics) ☆168 · Updated Mar 4, 2025
- Raw text of the 申報 (Shenbao) newspaper ☆27 · Updated Jan 17, 2022
- A Benchmark for Classical Chinese Based on a Crowdsourcing System ☆59 · Updated May 25, 2021
- Poetry-related datasets developed by the THUAIPoet (Jiuge) group ☆235 · Updated Apr 3, 2020
- Tokenizer, POS-tagger, and dependency-parser for Classical Chinese ☆15 · Updated Dec 30, 2025
- An evaluation benchmark for classical Chinese ☆18 · Updated Dec 13, 2023
- 渊 (Yuan), a project for Classical Chinese ☆110 · Updated Feb 23, 2022
- A comprehensive classical Chinese poetry dataset of more than 850,000 poems, from the pre-Qin period to the modern era ☆1,718 · Updated Aug 8, 2023
- ☆415 · Updated Jul 20, 2025
- A Chinese character decomposition dictionary (漢語拆字字典) ☆810 · Updated Jan 8, 2023
- A paper list on automatic poetry generation, analysis, translation, etc. ☆186 · Updated May 29, 2021
- Named entity recognition for Classical Chinese based on BiLSTM+CRF, covering entities such as persons, locations, organizations, and dates ☆10 · Updated Jan 19, 2021
- A Chinese classical poetry mining project, including corpus building via web crawling and content analysis with NLP methods ☆119 · Updated Oct 7, 2018
- A Classical-to-Modern Chinese translation model built on bert4keras, using the masked self-attention UniLM pre-trained model (Li et al., 2019) as the translation backbone; plain Modern Chinese BERT/RoBERTa weights serve as UniLM's initial weights for training… ☆53 · Updated May 3, 2022
- A comprehensive library of classical Chinese poetry and prose: over 210,000 poems with annotations and commentary, more than 10,000 poets with biographies, over 1,600 introductions to cipai (tune patterns), analyses of more than 70 Chinese dynasties, and nearly 200 category tags for classical texts ☆400 · Updated Sep 11, 2023
- A pre-trained LSTM model that segments unpunctuated historical Chinese texts ☆28 · Updated Nov 19, 2021
- Official GitHub repo for ACLUE, an evaluation benchmark focused on ancient Chinese language comprehension ☆33 · Updated Mar 20, 2024
- A corpus of classical Chinese poetry ☆27 · Updated Sep 1, 2016
- ☆19 · Updated Oct 6, 2023
- ☆21 · Updated Apr 30, 2023