GanjinZero / GTS
Code for "Unsupervised multi-granular Chinese word segmentation and term discovery via graph partition" [JBI]
☆15 · Updated 2 years ago
Related projects:
- DescriptionPairsExtraction: extracts entity and its-description pairs based on ALBERT and structured-data back-annotation. ☆20 · Updated 2 years ago
- An Industry Evaluation of Embedding-based Entity Alignment @ COLING'20 ☆24 · Updated 2 years ago
- Relation extraction with Efficient-GlobalPointer ☆22 · Updated 2 years ago
- Nested named entity recognition based on span classification and negative sampling ☆14 · Updated last year
- The source code of the paper "An Effective System for Multi-format Information Extraction". ☆18 · Updated 3 years ago
- Winning solution for the CHIP 2021 task on classifying clinical findings in medical dialogues as positive or negative ☆18 · Updated 2 years ago
- Official code for TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations (EMNLP 2021) ☆40 · Updated last month
- Study of the paper "A Frustratingly Easy Approach for Joint Entity and Relation Extraction" ☆29 · Updated 3 years ago
- Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation ☆22 · Updated 4 years ago
- Source code for the paper "LET: Linguistic Knowledge Enhanced Graph Transformer for Chinese Short Text Matching", AAAI 2021. ☆48 · Updated 3 years ago
- EMNLP 2020 Findings paper: Minimize Exposure Bias of Seq2Seq Models in Joint Entity and Relation Extraction ☆50 · Updated last year
- PyTorch version of Chinese multi-turn dialogue (Cdial) based on GPT + NEZHA ☆11 · Updated last year
- Code & data for our paper "Pattern-Based Chinese Hypernym-Hyponym Relation Extraction Method" ☆12 · Updated 4 years ago
- The data and code for the baseline setting of a medical chatbot. ☆45 · Updated 3 years ago
- CCL 2022 news storyline relation recognition ☆29 · Updated last year
- Implementation of the AAAI 2021 paper: Nested Named Entity Recognition with Partially Observed TreeCRFs ☆52 · Updated 3 years ago
- Baselines for the CCKS 2022 task "Commonsense Knowledge Salience Evaluation" ☆31 · Updated last year
- Apply the Circular to the Pretraining Model ☆38 · Updated 2 years ago
- Data augmentation via back-translation; currently integrates Baidu, Youdao, and Google (requires a proxy) translation. ☆20 · Updated 3 years ago
- BERT-style pretrained language model implementation in two steps: pretraining and fine-tuning. Currently includes BERT, RoBERTa, and ALBERT, all supporting Whole Word Masking. ☆15 · Updated 4 years ago
- Knowledge graph question answering based on "Seq2Seq + prefix tree" ☆71 · Updated 2 years ago
- [EMNLP 2022 Findings] Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study ☆33 · Updated 6 months ago
- AAAI 2022: "CODE: Contrastive Pre-training with Adversarial Fine-tuning for Zero-shot Expert Linking" ☆12 · Updated 3 years ago