elementx-ai / albert-qa-demo
Reading comprehension with the ALBERT transformer model
★15, updated 3 years ago
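For context, here is a minimal sketch of the kind of extractive question answering this demo covers, using the Hugging Face transformers pipeline with an ALBERT checkpoint. The checkpoint name is an assumption chosen for illustration; it is not necessarily the model the demo ships with.

```python
# A minimal sketch, not the demo's actual code: extractive question answering
# with an ALBERT checkpoint through the Hugging Face transformers pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/albert-base-v2-squad2",  # assumed SQuAD 2.0-fine-tuned ALBERT checkpoint
)

context = (
    "ALBERT is a lite variant of BERT that shares parameters across layers "
    "and factorizes the embedding matrix to reduce the number of parameters."
)
result = qa(question="How does ALBERT reduce the number of parameters?", context=context)
print(result["answer"], result["score"])
```

The pipeline returns a span extracted from the context along with a confidence score, which is the basic reading-comprehension setup the repositories below relate to.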
Alternatives and similar repositories for albert-qa-demo
Users interested in albert-qa-demo are comparing it to the libraries listed below.
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ (★37, updated 4 years ago)
- Viewer for the 🤗 datasets library. (★85, updated 4 years ago)
- Training a model without a dataset for natural language inference (NLI) (★25, updated 5 years ago)
- A 🤗-style implementation of BERT using lambda layers instead of self-attention (★69, updated 5 years ago)
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. (★147, updated 4 years ago)
- A Benchmark Dataset for Understanding Disfluencies in Question Answering (★64, updated 4 years ago)
- On Generating Extended Summaries of Long Documents (★78, updated 4 years ago)
- LM Pretraining with PyTorch/TPU (★136, updated 6 years ago)
- (no description) (★66, updated 5 years ago)
- QED: A Framework and Dataset for Explanations in Question Answering (★118, updated 4 years ago)
- Accelerated NLP pipelines for fast inference on CPU, built with Transformers and ONNX Runtime (a generic export-and-run sketch follows this list) (★127, updated 4 years ago)
- Code for the paper: Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries (★19, updated 3 years ago)
- Code for obtaining the Curation Corpus abstractive text summarisation dataset (★127, updated 4 years ago)
- Stanford's Alexa Prize socialbot (★133, updated 2 years ago)
- This repository contains datasets and code for the paper "HINT3: Raising the bar for Intent Detection in the Wild" accepted at EMNLP-2020… (★32, updated 4 years ago)
- Question generation from Reading Comprehension (★18, updated 3 years ago)
- Fine-tune transformers with pytorch-lightning (★44, updated 3 years ago)
- Official code and data repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies" (https://arxiv.org/abs/1906.02622)… (★93, updated last year)
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale (★156, updated last year)
- KitanaQA: Adversarial training and data augmentation for neural question-answering models (★56, updated 2 years ago)
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch (★76, updated 4 years ago)
- Question-answer pairs collected from Google (★129, updated 4 years ago)
- Dynamic ensemble decoding with transformer-based models (★29, updated 2 years ago)
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines (★137, updated 2 years ago)
- AI apps/benchmark for legaltech (★112, updated 4 years ago)
- Datasets I have created for scientific summarization, and a trained BertSum model (★115, updated 6 years ago)
- TorchServe+Streamlit for easily serving your HuggingFace NER models (★33, updated 3 years ago)
- Factorization of the neural parameter space for zero-shot multi-lingual and multi-task transfer (★39, updated 5 years ago)
- Training T5 to perform numerical reasoning. (★24, updated 4 years ago)
- Many Natural Language Processing tasks rely on sentence boundary detection (SBD). Although amazing libraries like spacy provide state of… (★61, updated 5 years ago)
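Several of the entries above, notably the ONNX Runtime-accelerated pipelines, revolve around exporting Transformers models for fast CPU inference. The sketch below is generic and does not use that repository's own API: it exports a model with torch.onnx and runs it through onnxruntime. The sentiment-classification checkpoint is an arbitrary example chosen for illustration.

```python
# A generic sketch (not the listed repository's API): export a Transformers
# model with torch.onnx and run it on CPU through ONNX Runtime.
import onnxruntime as ort
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # arbitrary example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.config.return_dict = False  # return plain tuples so the export traces cleanly
model.eval()

# Trace and export the model with dynamic batch/sequence axes.
dummy = tokenizer("export example", return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)

# Run the exported graph on CPU with ONNX Runtime.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
encoded = tokenizer("ONNX Runtime makes CPU inference noticeably faster.", return_tensors="np")
(logits,) = session.run(
    ["logits"],
    {"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]},
)
print(logits.argmax(axis=-1))
```

The same export-then-run pattern applies to question-answering models such as the ALBERT demo above; only the head and the output names change.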