yk / huggingface-nlp-demo
⭐ 46 · Updated 4 years ago
Alternatives and similar repositories for huggingface-nlp-demo:
Users interested in huggingface-nlp-demo are comparing it to the repositories listed below.
- Fine-tune transformers with pytorch-lightning (see the sketch after this list) ⭐ 44 · Updated 3 years ago
- Viewer for the 🤗 datasets library. ⭐ 84 · Updated 3 years ago
- ⭐ 87 · Updated 2 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ⭐ 116 · Updated 3 years ago
- ELECTRA model for NLP ⭐ 13 · Updated 4 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ⭐ 146 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ⭐ 135 · Updated last year
- Helper scripts and notes that were used while porting various NLP models ⭐ 45 · Updated 3 years ago
- Dynamic ensemble decoding with transformer-based models ⭐ 29 · Updated last year
- Low-code pre-built pipelines for experiments with huggingface/transformers for data scientists in a rush. ⭐ 16 · Updated 4 years ago
- This repository contains the code for "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Representations". ⭐ 63 · Updated 4 years ago
- ⭐ 21 · Updated 3 years ago
- BERT and RoBERTa fine-tuning over the SQuAD dataset using pytorch-lightning ⚡️, 🤗 transformers & 🤗 nlp. ⭐ 36 · Updated last year
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ⭐ 57 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ⭐ 93 · Updated 2 years ago
- State-of-the-art semantic sentence embeddings ⭐ 99 · Updated 2 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models ⭐ 32 · Updated 2 years ago
- Introduction to the recently released T5 model from the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ⭐ 35 · Updated 4 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ⭐ 52 · Updated last year
- BERT models for many languages created from Wikipedia texts ⭐ 33 · Updated 4 years ago
- LM pretraining with PyTorch/TPU ⭐ 134 · Updated 5 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ⭐ 26 · Updated 3 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ⭐ 80 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ⭐ 69 · Updated 4 years ago
- Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding (AAAI 2020) - PyTorch implementation ⭐ 31 · Updated last year
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ⭐ 37 · Updated 4 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ⭐ 58 · Updated 2 years ago
- ⭐ 18 · Updated 3 years ago
- NLP examples using the 🤗 libraries ⭐ 41 · Updated 4 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ⭐ 58 · Updated 3 years ago
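Several entries above center on the same workflow: fine-tuning a 🤗 Transformers model with pytorch-lightning. The sketch below is a minimal, illustrative example of that workflow, not code from any of the listed repositories; the model name (bert-base-uncased), the GLUE/SST-2 data slice, and all hyperparameters are assumptions chosen to keep the example short.

```python
# Minimal sketch (not from any repository listed above): fine-tuning a
# Hugging Face Transformers classifier with pytorch-lightning.
# Model name, dataset slice, and hyperparameters are illustrative assumptions.
import pytorch_lightning as pl
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class SentimentClassifier(pl.LightningModule):
    def __init__(self, model_name: str = "bert-base-uncased", lr: float = 2e-5):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # The model returns a loss when `labels` are included in the batch.
        outputs = self.model(**batch)
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    # A tiny SST-2 slice keeps the demo fast; any Hugging Face text dataset works the same way.
    train_data = load_dataset("glue", "sst2", split="train[:1%]")

    def tokenize(batch):
        return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

    train_data = train_data.map(tokenize, batched=True)
    train_data = train_data.rename_column("label", "labels")
    train_data.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

    loader = DataLoader(train_data, batch_size=16, shuffle=True)
    pl.Trainer(max_epochs=1).fit(SentimentClassifier(), loader)
```

Lightning supplies the training loop and device handling, so the module only has to define the loss computation and the optimizer; this is the basic pattern the pytorch-lightning fine-tuning repos above build on.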