bjascob / LemmInflect
A Python module for English lemmatization and inflection.
☆270 · Updated 2 years ago
Alternatives and similar repositories for LemmInflect
Users interested in LemmInflect are comparing it to the libraries listed below.
- Text tokenization and sentence segmentation (segtok v2) ☆206 · Updated 3 years ago
- Implementation of the ClausIE information extraction system for Python and spaCy ☆224 · Updated 3 years ago
- A Python module for word inflections designed for use with spaCy ☆93 · Updated 5 years ago
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested Python dictionary ☆317 · Updated 2 months ago
- Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further lang… ☆197 · Updated 2 years ago
- Google USE (Universal Sentence Encoder) for spaCy ☆184 · Updated 2 years ago
- spacy-wordnet creates annotations that make it easy to use WordNet and WordNet Domains via the NLTK WordNet interface ☆260 · Updated last month
- A Word Sense Disambiguation system integrating implicit and explicit external knowledge ☆69 · Updated 4 years ago
- Text-to-sentence splitter using a heuristic algorithm by Philipp Koehn and Josh Schroeder ☆254 · Updated 2 years ago
- A modern, interlingual wordnet interface for Python ☆260 · Updated 3 weeks ago
- A minimal, pure-Python library to interface with CoNLL-U format files ☆152 · Updated this week
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆223 · Updated 2 years ago
- A tokenizer and sentence splitter for German and English web and social media texts ☆147 · Updated 9 months ago
- spaCy + UDPipe ☆163 · Updated 3 years ago
- LASER multilingual sentence embeddings as a pip package ☆224 · Updated 2 years ago
- A simple Python package for calculating a variety of lexical diversity indices ☆79 · Updated 2 years ago
- Robust and fast tokenization alignment library for Rust and Python (https://tamuhey.github.io/tokenizations/) ☆193 · Updated 2 years ago
- PYthon Automated Term Extraction ☆316 · Updated 2 years ago
- Easier Automatic Sentence Simplification Evaluation ☆161 · Updated 2 years ago
- Obtain word alignments using pretrained language models (e.g., mBERT) ☆376 · Updated last year
- A sentence segmenter that actually works! ☆305 · Updated 5 years ago
- Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Do… ☆82 · Updated last year
- LexRank algorithm for text summarization ☆230 · Updated last year
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models ☆155 · Updated 2 years ago
- Language-independent truecaser in Python ☆160 · Updated 3 years ago
- ✔️ Contextual word checker for better suggestions (not actively maintained) ☆417 · Updated 8 months ago
- Enhanced Subject Word Object Extraction ☆152 · Updated 6 months ago
- Python port of the Moses tokenizer, truecaser and normalizer ☆495 · Updated last year
- Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities ☆363 · Updated 2 years ago
- A module to compute textual lexical richness (aka lexical diversity) ☆110 · Updated 2 years ago