t-systems-on-site-services-gmbh / german-wikipedia-text-corpus
This is a German text corpus from Wikipedia. It is cleaned, preprocessed, and split into sentences. Its purpose is to train NLP embeddings such as fastText or ELMo (deep contextualized word representations).
☆23 · Updated 3 years ago
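The repository does not document its exact pipeline, so the sketch below only illustrates the general shape of "cleaned, preprocessed and sentence-split" for a Wikipedia dump: collapse leftover whitespace, then emit one sentence per line, which is the input format embedding trainers like fastText expect. The splitting rule here is deliberately naive and is an assumption, not the repository's actual code.

```python
import re

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter: break after ., ! or ? followed by whitespace.

    Illustrative only; real pipelines handle abbreviations
    ("z. B.", "Dr.") and other edge cases far more carefully.
    """
    # Collapse whitespace left over from stripping Wikipedia markup.
    text = re.sub(r"\s+", " ", text).strip()
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

raw = "Berlin ist die Hauptstadt Deutschlands.  Sie hat rund 3,7 Millionen Einwohner."
for sentence in split_sentences(raw):
    print(sentence)  # one sentence per line, ready for embedding training
```

One-sentence-per-line output is convenient because both fastText's unsupervised mode and most ELMo training setups consume plain text files line by line.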
Alternatives and similar repositories for german-wikipedia-text-corpus
Users interested in german-wikipedia-text-corpus are comparing it to the libraries listed below.
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested python dictionary. ☆319 · Updated 3 weeks ago
- A minimal, pure Python library to interface with CoNLL-U format files. ☆153 · Updated 3 weeks ago
- An initiative to collect and distribute resources for co-reference resolution in a unified standard. ☆25 · Updated last year
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆223 · Updated 3 years ago
- Plan and train German transformer models. ☆23 · Updated 4 years ago
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models ☆156 · Updated 3 years ago
- Obtain Word Alignments using Pretrained Language Models (e.g., mBERT) ☆385 · Updated 2 years ago
- Identifying Historical People, Places and other Entities: Shared Task on Named Entity Recognition and Linking on Historical Newspapers at… ☆21 · Updated last year
- A tokenizer and sentence splitter for German and English web and social media texts. ☆150 · Updated last year
- ☆64 · Updated 2 years ago
- Linguistic and stylistic complexity measures for (literary) texts ☆84 · Updated last year
- A Dataset of German Legal Documents for Named Entity Recognition ☆172 · Updated 3 years ago
- Wikipedia text corpus for self-supervised NLP model training ☆46 · Updated 3 years ago
- Compiled tools, datasets, and other resources for historical text normalization. ☆20 · Updated 6 years ago
- Alignment and annotation for comparable documents. ☆22 · Updated 7 years ago
- Bicleaner is a parallel corpus classifier/cleaner that aims at detecting noisy sentence pairs in a parallel corpus. ☆159 · Updated last year
- BERT and ELECTRA models trained on Europeana Newspapers ☆38 · Updated 4 years ago
- ☆50 · Updated last year
- coFR: COreference resolution tool for FRench (and singletons). ☆26 · Updated 5 years ago
- UIMA CAS processing library written in Python ☆90 · Updated last month
- Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13 ☆198 · Updated 3 months ago
- Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities. ☆368 · Updated 3 years ago
- OpusFilter - Parallel corpus processing toolkit ☆113 · Updated last week
- Text tokenization and sentence segmentation (segtok v2) ☆208 · Updated 3 years ago
- Repository for the Georgetown University Multilayer Corpus (GUM) ☆103 · Updated last month
- Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further lang… ☆197 · Updated 3 years ago
- An easy-to-use library to extract indices from texts. ☆29 · Updated 4 years ago
- Shared BERT model for 4 languages of Bulgarian, Czech, Polish and Russian. Slavic NER model. ☆78 · Updated 3 years ago
- This is a simple Python package for calculating a variety of lexical diversity indices ☆82 · Updated 2 years ago
- Ten Thousand German News Articles Dataset for Topic Classification ☆86 · Updated 3 years ago
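Several entries above deal with the CoNLL-U format (a tab-separated, ten-column encoding of Universal Dependencies treebanks). As a toy illustration of what "turns it into a nested python dictionary" means, here is a minimal stdlib-only reader; it is not the parser linked above, and it ignores comment lines, multiword tokens, and empty nodes that the dedicated libraries handle properly.

```python
# The ten standard CoNLL-U columns, in order.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_conllu(text: str) -> list[list[dict]]:
    """Toy CoNLL-U reader: sentences are blank-line separated blocks of
    tab-separated token lines; '#' lines are comments and are skipped."""
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                sentences.append(current)
                current = []
        elif not line.startswith("#"):
            current.append(dict(zip(FIELDS, line.split("\t"))))
    if current:
        sentences.append(current)
    return sentences

sample = (
    "# text = Der Hund bellt.\n"
    "1\tDer\tder\tDET\t_\t_\t2\tdet\t_\t_\n"
    "2\tHund\tHund\tNOUN\t_\t_\t3\tnsubj\t_\t_\n"
    "3\tbellt\tbellen\tVERB\t_\t_\t0\troot\t_\t_\n"
)
tokens = parse_conllu(sample)[0]
print(tokens[1]["form"], tokens[1]["upos"])  # Hund NOUN
```

Each sentence becomes a list of per-token dicts keyed by the standard column names, which is roughly the nested structure the listed parsers produce.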