taoyafan / abstractive_summarization
Code for the 23rd-place entry (Team 水滴队) in the Byte Cup 2018 International Machine Learning Contest
☆46 · Updated 6 years ago
Alternatives and similar repositories for abstractive_summarization
Users interested in abstractive_summarization are comparing it to the repositories listed below
- Byte Cup 2018 International Machine Learning Contest (3rd prize) ☆77 · Updated 2 years ago
- A collection of text-classification methods, mainly covering TextCNN, TextRNN, LEAM, Transformer, Attention, fastText, HAN, and others ☆75 · Updated 6 years ago
- 6th-place code for the 2019 达观杯 (DataGrand Cup) competition ☆43 · Updated 2 years ago
- Chinese text classification with BERT ☆40 · Updated 6 years ago
- 2019 Language and Intelligence Technology Competition: knowledge-graph-based proactive chat ☆115 · Updated 6 years ago
- Using ELMo in a Chinese-language environment ☆104 · Updated 6 years ago
- First-place solution to an automotive-topic sentiment analysis competition ☆27 · Updated 6 years ago
- A baseline for event-subject extraction in the financial domain (CCKS 2019) ☆119 · Updated 6 years ago
- Adversarial-attack text-matching competition ☆42 · Updated 5 years ago
- 8th-place solution to the 2019 CAIL (法研杯) machine reading comprehension challenge ☆16 · Updated 5 years ago
- Baseline for the Sohu Campus Algorithm Competition ☆66 · Updated 6 years ago
- textsum: a TensorFlow implementation of a Seq2Seq-attention model and other strategies for text summarization tasks such as abstract generation and gist extraction; parts of the code were adapted from other authors and will be refactored later ☆30 · Updated 5 years ago
- Baseline for CCKS 2019 IPRE ☆48 · Updated 5 years ago
- 9th-place solution to the DataGrand 2019 information extraction competition ☆130 · Updated 5 years ago
- Chinese sequence labeling based on BERT ☆141 · Updated 6 years ago
- 5th-place code for the information extraction track of the 2019 Baidu Language and Intelligence Technology Competition ☆69 · Updated 5 years ago
- Ant Financial competition, 15th place out of 2632 ☆47 · Updated 6 years ago
- Seq2seq + attention model for Chinese text summarization ☆41 · Updated 7 years ago
- Text classification and representation-learning networks based on BiLSTM and self-attention ☆29 · Updated 6 years ago
- Source code and models for the 5th-place entry (leaderboard B) in the knowledge-driven dialogue track of the 2019 Language and Intelligence Technology Competition ☆25 · Updated 5 years ago
- NLP Pretrained Embeddings, Models and Datasets Collection (NLP_PEMDC); the collection keeps updating ☆64 · Updated 5 years ago
- BERT classification and BERT-DSSM implementations in Keras ☆93 · Updated 4 years ago
- Similar case matching ☆46 · Updated 5 years ago
- Our experience, lessons, and code ☆48 · Updated 8 years ago
- ☆60 · Updated 5 years ago
- ☆31 · Updated 6 years ago
- Text summarization with seq2seq + attention + beam search in TensorFlow ☆60 · Updated 6 years ago
- Keyword extraction; 3rd place (solo) out of 591 in the 神策杯 2018 college algorithm competition ☆65 · Updated 6 years ago
- A Chinese word segmentation model based on BERT, F1 score 97% ☆93 · Updated 6 years ago
- Summary of the 2018 Machine Reading Comprehension Technology Competition: ranked 6th by BLEU-4 and 14th by ROUGE-L among more than 1,000 teams worldwide (no ensembling, no pretrained word embeddings, no dropout) ☆30 · Updated 6 years ago