t-systems-on-site-services-gmbh / german-wikipedia-text-corpus
This is a German text corpus from Wikipedia. It is cleaned, preprocessed, and sentence-split. Its purpose is to train NLP embeddings such as fastText or ELMo (deep contextualized word representations).
☆24 · Updated 3 years ago
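Since the corpus is sentence-split, a common way to consume it for embedding training is to stream it as token lists. The sketch below assumes a one-sentence-per-line plain-text layout and a hypothetical filename `corpus.txt`; the repository's actual file layout may differ.

```python
def iter_sentences(path):
    """Yield each non-empty line of a sentence-per-line corpus file
    as a list of whitespace-separated tokens."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            tokens = line.strip().split()
            if tokens:
                yield tokens

# Token lists in this shape are the input format expected by, e.g.,
# gensim's FastText / Word2Vec trainers:
#   FastText(sentences=list(iter_sentences("corpus.txt")), vector_size=100)
```

Streaming with a generator keeps memory use flat even for a full Wikipedia dump; for very large corpora, gensim also accepts a `corpus_file` path directly.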
Alternatives and similar repositories for german-wikipedia-text-corpus
Users interested in german-wikipedia-text-corpus are comparing it to the libraries listed below.
- coFR: COreference resolution tool for FRench (and singletons). ☆24 · Updated 5 years ago
- Compiled tools, datasets, and other resources for historical text normalization. ☆18 · Updated 6 years ago
- Wikipedia text corpus for self-supervised NLP model training ☆44 · Updated 2 years ago
- BERT and ELECTRA models trained on Europeana Newspapers ☆38 · Updated 3 years ago
- Alignment and annotation for comparable documents. ☆22 · Updated 6 years ago
- OpusFilter - Parallel corpus processing toolkit ☆104 · Updated this week
- A Word Sense Disambiguation system integrating implicit and explicit external knowledge. ☆69 · Updated 3 years ago
- ☆64 · Updated 2 years ago
- Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Do… ☆80 · Updated 11 months ago
- Poetry Corpora Annotated on Aesthetic Emotions ☆11 · Updated 2 years ago
- Data for the HIPE 2022 shared task. ☆18 · Updated last year
- 🖋 Resource and Tool for Writing System Identification -- LREC 2024 ☆16 · Updated last year
- ☆48 · Updated 11 months ago
- Plan and train German transformer models. ☆23 · Updated 4 years ago
- Identifying Historical People, Places and other Entities: Shared Task on Named Entity Recognition and Linking on Historical Newspapers at… ☆22 · Updated 10 months ago
- A minimal, pure Python library to interface with CoNLL-U format files. ☆151 · Updated 2 years ago
- SIGTYP 2024 Shared Task on Word Embedding Evaluation for Ancient and Historical Languages ☆9 · Updated last year
- An initiative to collect and distribute resources for co-reference resolution in a unified standard. ☆25 · Updated last year
- A Python module for evaluating NERC and NEL system performances as defined in the HIPE shared tasks (formerly CLEF-HIPE-2020-scorer). ☆14 · Updated last year
- Datasets for the Monolingual Word Sense Alignment (MWSA) task ☆12 · Updated 4 years ago
- Dutch coreference resolution & dialogue analysis using deterministic rules ☆21 · Updated 2 years ago
- A simple Python package for calculating a variety of lexical diversity indices ☆77 · Updated last year
- Linguistic and stylistic complexity measures for (literary) texts ☆81 · Updated last year
- UIMA CAS processing library written in Python ☆90 · Updated last week
- A German ELMo deep contextualized word representation, trained on a special German Wikipedia Text Corpus. ☆28 · Updated 5 years ago
- Distribution of word meanings in Wikipedia for English, Italian, French, German and Spanish. ☆10 · Updated 4 years ago
- Repository for the Georgetown University Multilayer Corpus (GUM) ☆97 · Updated 2 weeks ago
- A survey of corpora for Germanic low-resource languages and dialects ☆25 · Updated 6 months ago
- Deutsches Lyrik Korpus (DLK) / German Poetry Corpus ☆18 · Updated last year
- X-SRL Dataset, including the code for the SRL annotation projection tool and an out-of-the-box word alignment tool based on Multilingual … ☆15 · Updated 4 years ago