dpc11 / TextSimilarity
Multiple text-similarity measures implemented on top of the vector space model (VSM) and latent semantic indexing (LSI).
☆1 · Updated 7 years ago
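The vector-space side of such a similarity measure reduces to cosine similarity over term-frequency vectors. Below is a minimal stdlib-only sketch of that idea; the function name is illustrative (not from the repo), and whitespace tokenization stands in for a real Chinese segmenter such as jieba:

```python
from collections import Counter
import math

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity of two documents as term-frequency vectors (VSM)."""
    a, b = Counter(doc_a.split()), Counter(doc_b.split())
    # Dot product over the shared vocabulary only; absent terms contribute 0.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("deep learning for text", "text similarity with deep learning"))
```

LSI would additionally factor the term-document matrix (e.g. via truncated SVD) before comparing vectors, which is what distinguishes the repo's second family of measures from this plain bag-of-words baseline.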
Alternatives and similar repositories for TextSimilarity:
Users interested in TextSimilarity are comparing it to the libraries listed below.
- Self-implemented infobox extraction for an encyclopedia knowledge base via online analysis: structured extraction of entry infoboxes from Hudong Baike, Baidu Baike, and Sogou Baike, with fusion of the resulting encyclopedia knowledge. ☆35 · Updated 6 years ago
- Self-implemented BaiduIndexSpyder based on Selenium, with index-image decoding and digit-image transcription: automatic keyword-based collection of historical Baidu search index data. ☆41 · Updated 6 years ago
- Self-implemented NLP toolkit: a personal Chinese NLP component providing HMM- and CRF-based interfaces for word segmentation, part-of-speech tagging, and named entity recognition, plus a CRF-based dependency parsing interface. ☆55 · Updated 6 years ago
- Self-implemented WeiboIndexSpyder based on Selenium: collects the Sina Weibo Index (micro-index), including the overall, mobile, and PC indexes. ☆31 · Updated 6 years ago
- ZhidaoChatbot, a chatbot that can act as an expert on common questions such as why, how, when, who, and what, based on the online question-answer web… ☆42 · Updated 5 years ago
- Exercise: text mining on the open-source Jinri Toutiao (今日头条) dataset. ☆84 · Updated 6 years ago
- Self-implemented key-information extraction, including keywords and abstracts from text, using algorithms such as TextRank and TF-IDF; TextRank-based text summarization… ☆53 · Updated 6 years ago
- Following a rule system designed at the Chinese University of Hong Kong: first builds an initial keyword lexicon from a small sample of reviews, then matches each review against 18 sentence patterns to quickly and accurately identify the opinion target and its sentiment polarity. After several iterations of refining the keyword lexicon, once high accuracy is reached, Tableau is used for further analysis to identify the product attributes customers focus on and those that draw consistently positive or negative reviews; through… ☆53 · Updated 7 years ago
- New-word discovery using information entropy and left/right mutual information. ☆16 · Updated 6 years ago
- Personal summary after attending CCL2018 (the China National Conference on Computational Linguistics): includes a script for downloading the conference papers, downloads of the technical reports presented at the conference, and some personal takeaways. ☆27 · Updated 6 years ago
- Text mining of short danmaku (bullet-screen) comments on TV programs using LDA, Apriori, k-means, and word2vec models, outputting the corresponding statistics and plots. ☆21 · Updated 7 years ago
- Self-implemented text feature extraction using algorithms including CHI, DF, IG, and MI, for an experiment in text classification based on s… ☆49 · Updated 6 years ago
- NLP and related hands-on learning exercises. ☆40 · Updated 2 years ago
- Using jieba and doc2vec to implement sentiment analysis for Chinese documents. ☆80 · Updated 6 years ago
- Baike schema crawler for Baidu Baike and Hudong Baike: scripts for crawling the concept taxonomies of both encyclopedias. ☆36 · Updated 6 years ago
- Code lab for NLP, including doc2txt, TF-IDF, CNN text classification, HMM-based Chinese word segmentation, and CRF-based NER. ☆42 · Updated 6 years ago
- LDA topic modeling of Chinese text using the Python gensim library; a rare find, since online tutorials are almost all in English and there is essentially nothing for Chinese. Requires installing jieba for word segmentation, then removing stopwords, before LDA can be applied. ☆134 · Updated 5 years ago