thunlp / THULAC.so
An Efficient Lexical Analyzer for Chinese
☆44 · Updated 6 years ago
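As a quick orientation, here is a minimal sketch of the kind of segmentation and POS tagging THULAC provides. It assumes the Python binding (`pip install thulac`) from the same group rather than the C++ shared-library build listed here, so treat it as illustrative only.

```python
# Minimal sketch: Chinese word segmentation + POS tagging with the
# thulac Python package (assumed here; this repo is the C++ .so build).
import thulac

# seg_only=False keeps part-of-speech tags alongside each segmented word.
analyzer = thulac.thulac(seg_only=False)

# With text=True the result is a plain string such as "我_r 爱_v 北京_ns 天安门_ns".
print(analyzer.cut("我爱北京天安门", text=True))
```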
Alternatives and similar repositories for THULAC.so
Users interested in THULAC.so are comparing it to the libraries listed below.
- A Chinese word segmenter based on CRF ☆234 · Updated 7 years ago
- Chinese Natural Language Processing tools and examples ☆162 · Updated 9 years ago
- Chinese morphological analysis with Word Segment and POS Tagging data for MeCab ☆162 · Updated 8 years ago
- ZPar statistical parser. Universal language support (depending on the availability of training data), with language-specific features for… ☆135 · Updated 9 years ago
- Simple Solution for Multi-Criteria Chinese Word Segmentation ☆303 · Updated 5 years ago
- An attempt at Chinese word segmentation based on deep learning ☆84 · Updated 10 years ago
- Chinese word segmentation algorithm that requires no corpus ☆501 · Updated 5 years ago
- Annotated source code of the jieba Chinese word segmenter (Python version) ☆93 · Updated 7 years ago
- A Chinese sentiment dataset that may be useful for sentiment analysis ☆234 · Updated 9 years ago
- Automatic discovery of Chinese words in large texts ☆92 · Updated 11 years ago
- HanLP split out as a standalone package from the earlier hanLP-python-flask project ☆59 · Updated 8 years ago
- NLP Education Tools by YuZhen (www.yuzhenkeji.com) ☆51 · Updated 11 years ago
- Clone of "A Good Part-of-Speech Tagger in about 200 Lines of Python" by Matthew Honnibal ☆49 · Updated 9 years ago
- Chinese tokenizer and new-word finder: a three-stage mechanical Chinese segmentation algorithm plus an out-of-vocabulary new-word discovery algorithm ☆95 · Updated 9 years ago
- A Chinese corpus annotated with part-of-speech tags ☆208 · Updated 11 years ago
- Chinese word segmentation implemented with deep learning ☆63 · Updated 8 years ago
- Chinese dictionaries and corpora ☆176 · Updated 11 years ago
- A text analysis (match, rewrite, extract) engine, Python edition ☆80 · Updated 8 years ago
- Chinese word segmentation module of LTP ☆46 · Updated 10 years ago
- Train word2vec on Wikidata for word embedding tasks ☆123 · Updated 7 years ago
- My papers, notes, and anything in text ☆33 · Updated 6 years ago
- Pure Python NLP toolkit ☆55 · Updated 10 years ago
- Automatic error correction for Chinese text ☆86 · Updated 7 years ago
- Source code and corpora of the paper "Iterated Dilated Convolutions for Chinese Word Segmentation" ☆133 · Updated 4 years ago
- A Chinese word segmentation program that extracts Chinese words from a passage of text via correlation statistics, without requiring a Chinese corpus ☆56 · Updated 12 years ago
- Chinese word segmentation library based on an HMM model ☆166 · Updated 11 years ago
- A deep-learning-based natural language processing library ☆159 · Updated 7 years ago
- ☆129 · Updated 8 years ago
- A corpus of Chinese abbreviations, including negative full forms ☆199 · Updated 4 years ago
- Text Classification ToolKit ☆23 · Updated 7 years ago