gerulata/slovakbert
☆21 · Updated 2 years ago
Alternatives and similar repositories for slovakbert
Users interested in slovakbert are comparing it to the libraries listed below.
- A curated list of resources, such as tools and datasets, useful for processing the Slovak language ☆22 · Updated 4 months ago
- Interesting links to Slovak NLP tools, utilities, corpora, and resources ☆17 · Updated 3 years ago
- Polish BERT ☆72 · Updated 5 years ago
- RoBERTa models for Polish ☆88 · Updated 3 years ago
- Some notebooks for NLP ☆207 · Updated 2 years ago
- ☆50 · Updated 3 years ago
- ☆524 · Updated last year
- This repo is the home of Romanian Transformers ☆108 · Updated 3 years ago
- Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models ☆541 · Updated 2 months ago
- TensorFlow code and pre-trained models for BERT ☆18 · Updated 5 years ago
- Code to reproduce the results from the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning" (https://arxiv.org/abs/1909.04761) ☆282 · Updated 5 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 5 years ago
- Basic implementation of BERT and the Transformer in PyTorch in one short Python file (also includes a "predict next word" GPT task) ☆45 · Updated last year
- BERTje is a Dutch pre-trained BERT model developed at the University of Groningen. (EMNLP Findings 2020) "What's so special about BERT's …" ☆140 · Updated 2 years ago
- Source code, datasets, and trained models for the paper "Learning Advanced Mathematical Computations from Examples" (ICLR 2021), by François… ☆181 · Updated 4 years ago
- Colab TPU-compatible XLNet model code ☆12 · Updated 5 years ago
- How to encode sentences in a high-dimensional vector space, a.k.a. sentence embedding ☆135 · Updated 3 years ago
- A French sequence-to-sequence pretrained model ☆62 · Updated 3 years ago
- BERT model trained from scratch on Finnish ☆95 · Updated 4 years ago
- fastai ULMFiT: pretraining the language model, fine-tuning, and training a classifier ☆32 · Updated 3 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆15 · Updated 6 years ago
- A Greek edition of the BERT pre-trained language model ☆149 · Updated last year
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime ☆127 · Updated 4 years ago
- 📝 Natural language processing (NLP) utils: word embeddings (Word2Vec, GloVe, FastText, ...) and preprocessing transformers, compatible wi… ☆63 · Updated 2 years ago
- New dataset ☆310 · Updated 4 years ago
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆108 · Updated 4 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆330 · Updated last year
- Spanish Billion Word Corpus and Embeddings ☆51 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆119 · Updated 4 years ago
- A small repo showing how to easily use BERT (or other transformers) for inference ☆99 · Updated 5 years ago