t-systems-on-site-services-gmbh / german-wikipedia-text-corpus
This is a German text corpus built from Wikipedia. It has been cleaned, preprocessed, and sentence-split. Its purpose is to train NLP embeddings such as fastText or ELMo (deep contextualized word representations).
☆24 · Updated 3 years ago
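Since the corpus is intended for training embeddings such as fastText, here is a minimal sketch of how a sentence-split corpus like this could be used. The file name, hyperparameters, and the choice of the `fasttext` Python package are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: train fastText skip-gram vectors on a sentence-split corpus.
# "german_wikipedia.txt" and the hyperparameters below are assumed values.
import fasttext

model = fasttext.train_unsupervised(
    "german_wikipedia.txt",  # one preprocessed sentence per line (assumed layout)
    model="skipgram",        # skip-gram embeddings
    dim=300,                 # embedding dimensionality
    minCount=5,              # ignore very rare tokens
)

# Inspect the result and persist it.
print(model.get_word_vector("Wikipedia")[:5])        # first 5 dimensions of one vector
print(model.get_nearest_neighbors("Wikipedia", k=5)) # 5 most similar words
model.save_model("de_wiki_fasttext.bin")
```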
Alternatives and similar repositories for german-wikipedia-text-corpus
Users interested in german-wikipedia-text-corpus are comparing it to the libraries listed below.
- A tokenizer and sentence splitter for German and English web and social media texts. ☆147 · Updated 9 months ago
- OpusFilter - Parallel corpus processing toolkit. ☆109 · Updated last month
- A minimal, pure Python library to interface with CoNLL-U format files. ☆151 · Updated 2 weeks ago
- BERT and ELECTRA models trained on Europeana Newspapers. ☆38 · Updated 3 years ago
- ☆75 · Updated last month
- UIMA CAS processing library written in Python. ☆90 · Updated 3 months ago
- Dutch coreference resolution & dialogue analysis using deterministic rules. ☆22 · Updated 2 years ago
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models. ☆156 · Updated 2 years ago
- Bicleaner is a parallel corpus classifier/cleaner that aims at detecting noisy sentence pairs in a parallel corpus. ☆158 · Updated last year
- Transformer-based translation quality estimation. ☆113 · Updated 2 years ago
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested Python dictionary. ☆317 · Updated last month
- coFR: COreference resolution tool for FRench (and singletons). ☆25 · Updated 5 years ago
- ☆49 · Updated last year
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆223 · Updated 2 years ago
- Obtain word alignments using pretrained language models (e.g., mBERT). ☆376 · Updated last year
- Alignment and annotation for comparable documents. ☆22 · Updated 6 years ago
- LASER multilingual sentence embeddings as a pip package. ☆224 · Updated 2 years ago
- Easier Automatic Sentence Simplification Evaluation. ☆161 · Updated last year
- Ten Thousand German News Articles Dataset for Topic Classification. ☆86 · Updated 2 years ago
- Text-to-sentence splitter using the heuristic algorithm by Philipp Koehn and Josh Schroeder. ☆254 · Updated 2 years ago
- Linguistic and stylistic complexity measures for (literary) texts. ☆84 · Updated last year
- 🖋 Resource and tool for writing system identification -- LREC 2024. ☆19 · Updated last year
- A dataset of German legal documents for named entity recognition. ☆173 · Updated 2 years ago
- Code to reproduce the experiments from the paper. ☆101 · Updated last year
- Repository with code for MaChAmp: https://aclanthology.org/2021.eacl-demos.22/ ☆88 · Updated 4 months ago
- ☆64 · Updated 2 years ago
- A word sense disambiguation system integrating implicit and explicit external knowledge. ☆69 · Updated 4 years ago
- Automatic extraction of edited sentences from text edition histories. ☆83 · Updated 3 years ago
- Efficient low-memory aligner. ☆146 · Updated 8 months ago
- A simple Python package for calculating a variety of lexical diversity indices. ☆79 · Updated 2 years ago