bjascob / LemmInflect
A python module for English lemmatization and inflection.
☆268 · Updated last year
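For readers weighing these libraries, here is a minimal usage sketch of LemmInflect itself. The `getLemma`/`getInflection` calls follow the project's documented standalone API as best I recall it; exact return values and tag handling may differ by version, so treat the comments as illustrative rather than authoritative.

```python
# Minimal sketch, assuming the standalone LemmInflect API (getLemma / getInflection).
from lemminflect import getLemma, getInflection

# Lemmatization: inflected form -> lemma, guided by a universal POS tag.
print(getLemma('watches', upos='VERB'))    # expected something like ('watch',)

# Inflection: lemma -> surface form, guided by a Penn Treebank tag.
print(getInflection('watch', tag='VBD'))   # expected something like ('watched',)
```

LemmInflect also registers spaCy token extensions (e.g. `token._.lemma()`), which is the main point of comparison with the spaCy-oriented alternatives listed below.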
Alternatives and similar repositories for LemmInflect
Users interested in LemmInflect are comparing it to the libraries listed below.
- Text tokenization and sentence segmentation (segtok v2) ☆205 · Updated 3 years ago
- A modern, interlingual wordnet interface for Python ☆255 · Updated 3 weeks ago
- A python module for word inflections designed for use with spaCy. ☆92 · Updated 5 years ago
- Implementation of the ClausIE information extraction system for python+spacy ☆225 · Updated 2 years ago
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested python dictionary. ☆317 · Updated this week
- Text to sentence splitter using heuristic algorithm by Philipp Koehn and Josh Schroeder. ☆249 · Updated 2 years ago
- A minimal, pure Python library to interface with CoNLL-U format files. ☆151 · Updated 2 years ago
- Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further lang… ☆197 · Updated 2 years ago
- spacy-wordnet creates annotations that easily allow the use of wordnet and wordnet domains by using the nltk wordnet interface ☆260 · Updated 11 months ago
- A tokenizer and sentence splitter for German and English web and social media texts. ☆147 · Updated 7 months ago
- Google USE (Universal Sentence Encoder) for spaCy ☆184 · Updated 2 years ago
- A Word Sense Disambiguation system integrating implicit and explicit external knowledge. ☆69 · Updated 3 years ago
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆223 · Updated 2 years ago
- A sentence segmenter that actually works! ☆305 · Updated 4 years ago
- Language independent truecaser in Python. ☆160 · Updated 3 years ago
- Obtain Word Alignments using Pretrained Language Models (e.g., mBERT) ☆373 · Updated last year
- A python true casing utility that restores case information for texts ☆89 · Updated 2 years ago
- Robust and Fast tokenizations alignment library for Rust and Python https://tamuhey.github.io/tokenizations/ ☆192 · Updated last year
- Easier Automatic Sentence Simplification Evaluation ☆161 · Updated last year
- spaCy + UDPipe ☆162 · Updated 3 years ago
- Disambiguate is a tool for training and using state of the art neural WSD models ☆60 · Updated 3 weeks ago
- This is a simple Python package for calculating a variety of lexical diversity indices ☆77 · Updated last year
- LASER multilingual sentence embeddings as a pip package ☆224 · Updated last year
- PYthon Automated Term Extraction ☆315 · Updated 2 years ago
- Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities. ☆359 · Updated 2 years ago
- Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Do… ☆81 · Updated last year
- A module to compute textual lexical richness (aka lexical diversity). ☆109 · Updated last year
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models ☆156 · Updated 2 years ago
- 📃 Language Model based sentence scoring library ☆309 · Updated 3 years ago
- Sentence transformers models for SpaCy ☆107 · Updated 2 years ago