mhagiwara / github-typo-corpus
GitHub Typo Corpus: A Large-Scale Multilingual Dataset of Misspellings and Grammatical Errors
☆510 · Updated 5 years ago
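The corpus is distributed as a gzipped JSONL dump, one JSON object per harvested commit. Below is a minimal loading sketch; the file name and field names (repo, edits, src, tgt, text, is_typo) are assumptions based on the release described in the repository's README, so check the actual schema before relying on them.

```python
import gzip
import json

# Hypothetical path; the actual release archive is linked from the repository's README.
CORPUS_PATH = "github-typo-corpus.v1.0.0.jsonl.gz"

# Field names below (repo, edits, src, tgt, text, is_typo) are assumptions
# about the release schema, not a confirmed specification.
with gzip.open(CORPUS_PATH, "rt", encoding="utf-8") as f:
    for line in f:
        commit = json.loads(line)
        for edit in commit.get("edits", []):
            if edit.get("is_typo"):  # keep only edits labeled as typos, if that flag exists
                print(commit.get("repo"))
                print("  src:", edit["src"]["text"])
                print("  tgt:", edit["tgt"]["text"])
```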
Alternatives and similar repositories for github-typo-corpus
Users interested in github-typo-corpus are comparing it to the libraries listed below.
- Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities. ☆361 · Updated 2 years ago
- 💥 Use the latest Stanza (StanfordNLP) research models directly in spaCy ☆738 · Updated last year
- xfspell — the Transformer Spell Checker ☆190 · Updated 5 years ago
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested Python dictionary (see the parsing sketch after this list). ☆317 · Updated last month
- 📃 Language Model based sentence scoring library ☆309 · Updated 3 years ago
- 📰 Natural language processing (NLP) newsletter ☆302 · Updated 5 years ago
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆223 · Updated 2 years ago
- LASER multilingual sentence embeddings as a pip package ☆224 · Updated 2 years ago
- A sentence segmenter that actually works! ☆305 · Updated 5 years ago
- spaCy + UDPipe ☆163 · Updated 3 years ago
- UDPipe: Trainable pipeline for tokenizing, tagging, lemmatizing and parsing Universal Treebanks and other CoNLL-U files ☆386 · Updated last month
- Language independent truecaser in Python. ☆160 · Updated 3 years ago
- Misspelling Oblivious Word Embeddings ☆201 · Updated 6 years ago
- This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, an… ☆560 · Updated 3 years ago
- ERRor ANnotation Toolkit: Automatically extract and classify grammatical errors in parallel original and corrected sentences (see the annotation sketch after this list). ☆451 · Updated last year
- Text tokenization and sentence segmentation (segtok v2) ☆205 · Updated 3 years ago
- Easy-to-use word-to-word translations for 3,564 language pairs. ☆366 · Updated 4 years ago
- A minimal, pure Python library to interface with CoNLL-U format files. ☆151 · Updated 2 years ago
- ✔️ Contextual word checker for better suggestions (not actively maintained) ☆417 · Updated 7 months ago
- Create interactive textual heat maps for Jupyter notebooks ☆196 · Updated last year
- A Python module for English lemmatization and inflection. ☆270 · Updated last year
- Preprocessing Library for Natural Language Processing ☆164 · Updated 2 years ago
- The code to reproduce results from the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning" https://arxiv.org/abs/1909.04761 ☆282 · Updated 5 years ago
- PYthon Automated Term Extraction ☆315 · Updated 2 years ago
- Unsupervised Language Model Pre-training for French ☆248 · Updated 2 years ago
- Natural Language Processing Pipeline - Sentence Splitting, Tokenization, Lemmatization, Part-of-speech Tagging and Dependency Parsing ☆560 · Updated 9 months ago
- Robust and fast tokenization alignment library for Rust and Python https://tamuhey.github.io/tokenizations/ ☆192 · Updated last year
- Simple State-of-the-Art BERT-Based Sentence Classification with Keras / TensorFlow 2. Built with HuggingFace's Transformers. ☆201 · Updated last year
- Deep learning with text doesn't have to be scary. ☆275 · Updated 2 years ago
- Scripts and links to recreate the ELI5 dataset. ☆326 · Updated 4 years ago
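For the CoNLL-U parser item above, here is a minimal sketch of the nested, dict-like access it describes, assuming the item refers to the `conllu` package on PyPI (`pip install conllu`):

```python
from conllu import parse  # pip install conllu

# A tiny CoNLL-U fragment; columns must be tab-separated.
rows = [
    "# text = The dog barks .",
    "\t".join(["1", "The", "the", "DET", "_", "_", "2", "det", "_", "_"]),
    "\t".join(["2", "dog", "dog", "NOUN", "_", "_", "3", "nsubj", "_", "_"]),
    "\t".join(["3", "barks", "bark", "VERB", "_", "_", "0", "root", "_", "_"]),
    "\t".join(["4", ".", ".", "PUNCT", "_", "_", "3", "punct", "_", "_"]),
]
data = "\n".join(rows) + "\n"

sentences = parse(data)              # list of TokenList objects, one per sentence
token = sentences[0][0]
print(token["form"], token["upos"])  # tokens are dict-like: "The DET"
print(sentences[0].metadata["text"]) # sentence-level metadata from the comment lines
```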
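For the ERRor ANnotation Toolkit (ERRANT) item, a sketch of extracting and classifying edits between an original and a corrected sentence, assuming the `errant` package on PyPI with an installed English spaCy model:

```python
import errant  # pip install errant; also requires an English spaCy model

annotator = errant.load("en")

orig = annotator.parse("This are a example sentence .")
cor = annotator.parse("This is an example sentence .")

# Align the two parses and classify each edit (types like R:VERB:SVA, R:DET).
for edit in annotator.annotate(orig, cor):
    print(edit.o_str, "->", edit.c_str, "|", edit.type)
```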