bjascob / LemmInflect
A python module for English lemmatization and inflection.
☆ 274 · Updated 2 years ago
Alternatives and similar repositories for LemmInflect
Users interested in LemmInflect are comparing it to the libraries listed below.
- Text tokenization and sentence segmentation (segtok v2) ☆ 207 · Updated 3 years ago
- A modern, interlingual wordnet interface for Python ☆ 274 · Updated last week
- spacy-wordnet creates annotations that easily allow the use of wordnet and wordnet domains by using the nltk wordnet interface ☆ 261 · Updated 3 months ago
- Implementation of the ClausIE information extraction system for python+spacy ☆ 226 · Updated 3 years ago
- A Word Sense Disambiguation system integrating implicit and explicit external knowledge. ☆ 69 · Updated 4 years ago
- A tokenizer and sentence splitter for German and English web and social media texts. ☆ 148 · Updated 11 months ago
- A python module for word inflections designed for use with spaCy. ☆ 93 · Updated 5 years ago
- Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further lang… ☆ 197 · Updated 2 years ago
- Text to sentence splitter using heuristic algorithm by Philipp Koehn and Josh Schroeder. ☆ 255 · Updated 3 years ago
- A minimal, pure Python library to interface with CoNLL-U format files. ☆ 152 · Updated last week
- A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested python dictionary. ☆ 319 · Updated 4 months ago
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models ☆ 156 · Updated 2 years ago
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆ 223 · Updated 2 years ago
- Google USE (Universal Sentence Encoder) for spaCy ☆ 184 · Updated 2 years ago
- PYthon Automated Term Extraction ☆ 317 · Updated 2 years ago
- A sentence segmenter that actually works! ☆ 304 · Updated 5 years ago
- spaCy + UDPipe ☆ 163 · Updated 3 years ago
- Robust and fast tokenizations alignment library for Rust and Python https://tamuhey.github.io/tokenizations/ ☆ 193 · Updated 2 years ago
- Easier Automatic Sentence Simplification Evaluation ☆ 162 · Updated 2 years ago
- LASER multilingual sentence embeddings as a pip package ☆ 225 · Updated 2 years ago
- Obtain Word Alignments using Pretrained Language Models (e.g., mBERT) ☆ 384 · Updated 2 years ago
- Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Do… ☆ 81 · Updated last year
- Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities. ☆ 367 · Updated 2 years ago
- A simple Python package for calculating a variety of lexical diversity indices ☆ 81 · Updated 2 years ago
- ✔️ Contextual word checker for better suggestions (not actively maintained) ☆ 418 · Updated 10 months ago
- A python true casing utility that restores case information for texts ☆ 89 · Updated 3 years ago
- Language independent truecaser in Python. ☆ 160 · Updated 4 years ago
- Enhanced Subject Word Object Extraction ☆ 153 · Updated 8 months ago
- Disambiguate is a tool for training and using state of the art neural WSD models ☆ 60 · Updated 4 months ago
- Segment documents into coherent parts using word embeddings. ☆ 149 · Updated 3 years ago