elementx-ai / albert-qa-demo
Reading comprehension with the ALBERT transformer model
☆15 · Updated 3 years ago
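For orientation, here is a minimal sketch of the kind of extractive question answering the demo performs, using the standard 🤗 Transformers pipeline API. The checkpoint name is an assumption for illustration, not taken from albert-qa-demo itself.

```python
# Minimal extractive QA sketch with ALBERT via the 🤗 Transformers pipeline.
# The checkpoint below is illustrative; albert-qa-demo may ship its own weights.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/albert-base-v2-squad2",  # assumed ALBERT checkpoint fine-tuned on SQuAD 2.0
)

result = qa(
    question="Which model does the demo use?",
    context="The demo performs reading comprehension with the ALBERT transformer model.",
)
print(result["answer"], round(result["score"], 3))
```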
Alternatives and similar repositories for albert-qa-demo:
Users interested in albert-qa-demo are comparing it to the libraries listed below.
- Code for the paper "Saying No is an Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries" ☆19 · Updated 3 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- ☆66 · Updated 4 years ago
- Training a model without a dataset for natural language inference (NLI) ☆25 · Updated 4 years ago
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- On Generating Extended Summaries of Long Documents ☆78 · Updated 4 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ☆57 · Updated last year
- QED: A Framework and Dataset for Explanations in Question Answering ☆116 · Updated 3 years ago
- Dataset of sentences from Hindi stories annotated with different emotion tags ☆10 · Updated 5 years ago
- Question generation from reading comprehension ☆18 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis ☆146 · Updated 3 years ago
- Viewer for the 🤗 datasets library ☆84 · Updated 3 years ago
- Boolean question answering using multi-task learning and large LM embeddings such as BERT and RoBERTa ☆18 · Updated 5 years ago
- Accelerated NLP pipelines for fast inference on CPU, built with Transformers and ONNX Runtime (see the sketch after this list) ☆126 · Updated 4 years ago
- Dynamic ensemble decoding with transformer-based models ☆29 · Updated last year
- Use cases of Hugging Face's BERT (e.g. paraphrase generation, unsupervised extractive summarization) ☆20 · Updated 5 years ago
- BERT and RoBERTa fine-tuning on the SQuAD dataset using pytorch-lightning⚡️, 🤗-transformers & 🤗-nlp ☆36 · Updated last year
- A Benchmark Dataset for Understanding Disfluencies in Question Answering ☆62 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆75 · Updated 4 years ago
- Introduction to the recently released T5 model from the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆35 · Updated 4 years ago
- Code for bidirectional sequence generation (BiSon) for generating from BERT pre-trained models ☆51 · Updated 5 years ago
- The repository for the paper "When Do You Need Billions of Words of Pretraining Data?" ☆21 · Updated 4 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models ☆33 · Updated 2 years ago
- Pre-trained, multilingual sequence-to-sequence models for Indian languages ☆46 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPU v3-8 with GCP ☆58 · Updated 2 years ago
- Implementation of the papers on dual learning of natural language understanding and generation (ACL 2019, 2020; Findings of EMNLP 2020) ☆66 · Updated 4 years ago
- Official code and data repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies" (https://arxiv.org/abs/1906.02622) ☆93 · Updated 8 months ago
- Training T5 to perform numerical reasoning ☆23 · Updated 3 years ago
- Code and data for the Evaluation WG ☆41 · Updated 2 years ago
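As referenced in the ONNX Runtime entry above, here is a hedged sketch of CPU-accelerated inference: export a 🤗 Transformers model to ONNX with `torch.onnx.export` and run it with `onnxruntime`. This is generic Transformers + ONNX Runtime usage under stated assumptions, not that repository's actual API; the checkpoint name is illustrative.

```python
# Hedged sketch: export a Transformers model to ONNX and run it on CPU with
# ONNX Runtime. Generic usage, not the listed repository's own API.
import torch
import onnxruntime as ort
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()
model.config.return_dict = False  # tuple outputs trace/export more cleanly

# Export with dynamic batch/sequence axes so any input shape works at runtime.
dummy = tok("export me", return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)

# Run the exported graph on CPU through ONNX Runtime.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
enc = tok("ONNX Runtime makes CPU inference fast.", return_tensors="np")
logits = sess.run(
    ["logits"],
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
)[0]
print(logits.argmax(-1))  # predicted class id(s)
```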