althayr / Document-Layout-Parser
Parses a document (scanned or phone-captured) and returns the underlying question-answer layout, as a structured capture produced by a LayoutXLM model
☆10 Updated 3 years ago
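For rough context only, the sketch below (an assumption, not this project's actual code or API) shows one common way a LayoutXLM checkpoint from HuggingFace Transformers can be used to tag OCR tokens of a scanned form with FUNSD/XFUND-style question/answer labels. The checkpoint name, label set, and input file are placeholders, and the untuned classification head would need fine-tuning before its predictions mean anything.

```python
# Minimal sketch, assuming the HuggingFace "microsoft/layoutxlm-base" checkpoint
# and a FUNSD/XFUND-style label scheme. Not this repo's actual pipeline.
# Requires: transformers, torch, Pillow, pytesseract (for built-in OCR),
# and detectron2 (LayoutLMv2/LayoutXLM visual backbone).
import torch
from PIL import Image
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification

# Hypothetical label set for question/answer layout roles.
LABELS = ["O", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutxlm-base", num_labels=len(LABELS)
)  # a fine-tuned checkpoint is needed for meaningful predictions

# The processor runs OCR on the page image and builds input_ids, bboxes, and image tensors.
image = Image.open("scanned_form.png").convert("RGB")  # placeholder file name
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())

# Pair each OCR token with its predicted layout role (question vs. answer).
for token, label_id in zip(tokens, predictions):
    if token not in processor.tokenizer.all_special_tokens:
        print(token, LABELS[label_id])
```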
Related projects
Alternatives and complementary repositories for Document-Layout-Parser
- ☆52 Updated 3 years ago
- Shares problems encountered when applying Seq2Seq (S2S) in real-world use, along with their solutions. ☆27 Updated 4 years ago
- Text similarity based on RoFormer. ☆12 Updated 3 years ago
- We start a company-name recognition task with small-scale, low-quality training data, then use techniques to enhance model training s… ☆80 Updated 4 years ago
- ☆10 Updated 2 years ago
- ☆32 Updated 3 years ago
- Source code of the EMNLP 2021 paper "A Lightweight Pretrained Model for Chinese Spelling Check". ☆13 Updated 3 years ago
- DescriptionPairsExtraction: extracts entity-description pairs based on ALBERT and structured-data back-annotation… ☆20 Updated 2 years ago
- SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling ☆48 Updated 3 years ago
- schemakg, a schema knowledge graph that aims to cover as broad a range as possible, including entity schemas and event schemas… ☆28 Updated 3 years ago
- Templates for common NLP tasks. ☆34 Updated last year
- Knowledge-graph question answering based on "Seq2Seq + prefix tree (trie)". ☆71 Updated 2 years ago
- Slimmed-down NEZHA model weights. ☆21 Updated 3 years ago
- Lightweight deep learning inference service framework. ☆38 Updated 3 years ago
- A complete closed-loop pipeline for NER, from offline training to online deployment. ☆12 Updated 3 years ago
- Simple experiments with the R-Drop method on Chinese tasks. ☆90 Updated 2 years ago
- Minimal example of using a traced huggingface transformers model with libtorch. ☆35 Updated 4 years ago
- Uses BERT and other pretrained models for multiple-choice machine reading comprehension (Multiple Choice MRC). ☆15 Updated 3 years ago
- 2021 Sohu Campus Text Matching Algorithm Competition, entry from team 分比我们低的都是帅哥 ("everyone who scored lower than us is a handsome guy"). ☆42 Updated 3 years ago
- ☆22 Updated 3 years ago
- ☆13 Updated 2 years ago
- Just Code! Algorithm problems for interview practice, currently covering ByteDance interview questions, LeetCode, and 剑指 Offer, with more being added. ☆18 Updated 4 years ago
- Seq2seqAttGeneration, a basic implementation of text generation that uses a seq2seq attention model to generate series of poems. This project… ☆17 Updated 3 years ago
- Data augmentation via back-translation; currently integrates Baidu, Youdao, and Google (requires a VPN) translation services. ☆20 Updated 4 years ago
- BERT-style pretrained language model implementation in two steps: pretraining and fine-tuning. Currently includes BERT, RoBERTa, and ALBERT, all supporting Whole Word Mask mode. ☆16 Updated 4 years ago
- Using FasterTransformer to accelerate the inference speed of BERT and RoBERTa. ☆13 Updated 5 years ago
- Tianchi AI Innovation Competition 3: sharing from 周星星 of team ch12hu. ☆26 Updated 3 years ago
- ☆46 Updated 3 years ago
- PyTorch implementation of Chinese multi-turn dialogue (Cdial) based on GPT + NEZHA. ☆11 Updated 2 years ago
- GLM (General Language Model) ☆24 Updated 2 years ago