thunlp / THULAC.so
An Efficient Lexical Analyzer for Chinese
☆38 · Updated 5 years ago
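For a quick sense of what the analyzer does, the sketch below segments a sentence through THULAC's separate Python wrapper (the `thulac` package on PyPI, whose `cut` API is documented in its own README). This is for illustration only, since the listing here covers the C++ shared-library build:

```python
# Minimal sketch, assuming the `thulac` Python wrapper (pip install thulac),
# a separate distribution from the THULAC.so shared library listed here.
import thulac

seg = thulac.thulac(seg_only=True)            # seg_only=True: segment without POS tags
print(seg.cut("我爱自然语言处理", text=True))   # text=True returns a space-joined string
```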
Related projects
Alternatives and complementary repositories for THULAC.so
- Chinese Natural Language Processing tools and examples ☆163 · Updated 8 years ago
- A Chinese word segmenter based on CRF ☆234 · Updated 5 years ago
- The HanLP component split out from the earlier hanLP-python-flask project ☆60 · Updated 6 years ago
- Train word2vec on Wikidata for word embedding tasks ☆123 · Updated 6 years ago
- Chinese tokenizer and new-word finder: a three-stage mechanical Chinese segmentation algorithm plus an out-of-vocabulary word discovery algorithm ☆95 · Updated 8 years ago
- NLP education tools by YuZhen (www.yuzhenkeji.com) ☆50 · Updated 9 years ago
- THU Chinese Keyphrase Extraction Toolkit ☆124 · Updated 6 years ago
- Automatically discover Chinese words in large text corpora ☆92 · Updated 9 years ago
- Pure Python NLP toolkit ☆55 · Updated 8 years ago
- An experiment in Chinese word segmentation based on deep learning ☆85 · Updated 9 years ago
- Chinese short-sentence similarity ☆135 · Updated 6 years ago
- A deep-learning-based natural language processing library ☆152 · Updated 6 years ago
- ZPar statistical parser. Universal language support (depending on the availability of training data), with language-specific features for… ☆134 · Updated 8 years ago
- Chinese dictionaries and corpora ☆168 · Updated 10 years ago
- Simple Solution for Multi-Criteria Chinese Word Segmentation ☆300 · Updated 4 years ago
- Details of the cw2vec paper ☆83 · Updated 6 years ago
- CNN for Chinese text classification in TensorFlow ☆235 · Updated 6 years ago
- Source code and corpora for the paper "Iterated Dilated Convolutions for Chinese Word Segmentation" ☆136 · Updated 3 years ago
- Annotated source code of the jieba Chinese word segmenter (Python version) ☆89 · Updated 6 years ago
- Chinese documentation for FastText ☆62 · Updated 3 years ago
- Clone of "A Good Part-of-Speech Tagger in about 200 Lines of Python" by Matthew Honnibal ☆49 · Updated 8 years ago
- Chinese word segmentation algorithm that requires no corpus ☆499 · Updated 4 years ago
- My papers, notes, and other writing ☆34 · Updated 5 years ago
- A part-of-speech-tagged Chinese corpus ☆198 · Updated 10 years ago
- A Chinese word semantic similarity algorithm based on HowNet (《知网》) ☆41 · Updated 11 years ago
- A Public Corpus for Machine Learning ☆44 · Updated 6 years ago
- Automatic Chinese text error correction ☆80 · Updated 6 years ago
- yaha ☆267 · Updated 6 years ago
- A Chinese word segmentation program that extracts the Chinese words from a passage via correlation statistics, without needing a Chinese corpus ☆52 · Updated 11 years ago