stephantul / unitoken
Tokenization across languages. Useful as preprocessing for subword tokenization.
☆22 · Updated last year
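As a rough illustration of the idea in the description (pre-tokenizing text into word-level units before a subword tokenizer runs), here is a minimal sketch in plain Python. It does not use unitoken's actual API; the regex-based splitter and the toy subword step are assumptions made only to show where such a preprocessing step fits.

```python
import re

def pre_tokenize(text: str) -> list[str]:
    # Hypothetical word-level pre-tokenizer: pull out runs of word characters
    # and standalone punctuation. Real multilingual tokenizers need
    # language-specific rules (e.g. for scripts without spaces); this regex
    # is only a stand-in.
    return re.findall(r"\w+|[^\w\s]", text, flags=re.UNICODE)

def subword_split(word: str, max_len: int = 4) -> list[str]:
    # Toy stand-in for a subword tokenizer (e.g. BPE/WordPiece):
    # chop each pre-token into fixed-size pieces.
    return [word[i:i + max_len] for i in range(0, len(word), max_len)]

text = "Tokenization across languages!"
words = pre_tokenize(text)      # ['Tokenization', 'across', 'languages', '!']
subwords = [piece for w in words for piece in subword_split(w)]
print(subwords)
```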
Related projects
Alternatives and complementary repositories for unitoken
- An open-source NLP library: fast text cleaning and preprocessing ☆23 · Updated 3 years ago
- Summary Explorer is a tool to visually explore the state-of-the-art in text summarization. ☆43 · Updated 6 months ago
- ☆17 · Updated last year
- Source code and data for Like a Good Nearest Neighbor ☆28 · Updated 9 months ago
- Documentation effort for the BookCorpus dataset ☆33 · Updated 3 years ago
- Generate BERT vocabularies and pretraining examples from Wikipedias ☆18 · Updated 4 years ago
- ☆22 · Updated 2 years ago
- FAMIE: A Fast Active Learning Framework for Multilingual Information Extraction ☆23 · Updated 2 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆31 · Updated 5 months ago
- ☆29 · Updated 2 years ago
- BERT models for many languages created from Wikipedia texts ☆34 · Updated 4 years ago
- A web interface to understand language-specific BERT-models ☆17 · Updated 7 months ago
- Hyperparameter search for AllenNLP - powered by Ray TUNE ☆28 · Updated 5 years ago
- Combining encoder-based language models ☆11 · Updated 3 years ago
- Keras Implementation of Flair's Contextualized Embeddings ☆26 · Updated 3 years ago
- Implementation of Nested Named Entity Recognition using Flair ☆24 · Updated 3 years ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Data Programming by Demonstration (DPBD) for Document Classification ☆35 · Updated 3 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆23 · Updated 7 months ago
- A lightweight but powerful library to build token indices for NLP tasks, compatible with major Deep Learning frameworks like PyTorch and … ☆49 · Updated 4 years ago
- Embedding Recycling for Language models ☆38 · Updated last year
- ☆28 · Updated last year
- Converter from UD-trees to BART representation ☆36 · Updated 8 months ago
- ☆17 · Updated last year
- ☆14 · Updated last month
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆17 · Updated last month
- The repository for the paper "When Do You Need Billions of Words of Pretraining Data?" ☆20 · Updated 4 years ago
- A library for computing diverse text characteristics and using them to analyze data sets and models with ease. ☆40 · Updated 2 years ago
- ☆42 · Updated last year