Inyrkz / Question-Answering-using-BERT
Use BERT for Question Answering, fine-tuned on SQuAD 2.0
☆15 · Updated 5 years ago
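As context for the listing: a SQuAD-style BERT QA model emits one start logit and one end logit per token, and the predicted answer is the highest-scoring valid span (start ≤ end, bounded length). The decoding step can be sketched in plain Python; the tokens and logit values below are made up for illustration, not produced by any model.

```python
# Minimal sketch of SQuAD-style answer-span decoding. Assumes a model has
# already produced per-token start/end logits (values here are invented).
tokens = ["[CLS]", "who", "made", "bert", "[SEP]",
          "bert", "was", "introduced", "by", "google", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.0, 0.0, 0.2, 0.1, 0.3, 0.2, 2.5, 0.0]
end_logits   = [0.1, 0.0, 0.0, 0.0, 0.0, 0.3, 0.1, 0.2, 0.2, 2.7, 0.0]

def decode_span(start_logits, end_logits, max_len=15):
    """Return the (start, end) pair with the highest combined logit score,
    subject to start <= end and a maximum answer length."""
    best = (0, 0, float("-inf"))
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best[0], best[1]

s, e = decode_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # → google
```

In SQuAD 2.0 specifically, unanswerable questions are typically handled by also scoring the null span at the `[CLS]` position and predicting "no answer" when it beats the best content span by a threshold.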
Alternatives and similar repositories for Question-Answering-using-BERT
Users interested in Question-Answering-using-BERT are comparing it to the repositories listed below
- A repository for our AAAI-2020 Cross-lingual-NER paper. Code will be updated shortly. ☆47 · Updated 2 years ago
- Question Answering task using Deep Learning on the SQuAD dataset ☆21 · Updated 2 years ago
- Code for the paper "Target-oriented Fine-tuning for Zero-Resource Named Entity Recognition" ☆21 · Updated 3 years ago
- Implementation of the paper "Learning to Encode Text as Human-Readable Summaries using GAN" ☆66 · Updated 5 years ago
- BERT, which stands for Bidirectional Encoder Representations from Transformers, is the SOTA in transfer learning for NLP. ☆56 · Updated 5 years ago
- This is the official code for "Extractive Summarization of Long Documents by Combining Global and Local Context" ☆69 · Updated 4 years ago
- MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems ☆67 · Updated 3 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- Code for the paper "Efficient Adaptation of Pretrained Transformers for Abstractive Summarization" ☆71 · Updated 6 years ago
- ALBERT for the Conversational Question Answering Challenge ☆23 · Updated 2 years ago
- ☆57 · Updated 4 years ago
- Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems ☆22 · Updated 4 years ago
- RefNet for Question Generation ☆47 · Updated 4 years ago
- BERT for joint intent classification and slot filling ☆39 · Updated 6 years ago
- Uses GloVe embeddings and greedy sequence segmentation to semantically segment a text document into any number k of segments. ☆33 · Updated 6 years ago
- ☆52 · Updated 4 years ago
- Joint extraction-and-compression text summarization ☆40 · Updated 5 years ago
- ☆33 · Updated 7 years ago
- This is an example program illustrating BERT's masked language model. ☆28 · Updated 5 years ago
- Fine-tuned BERT on the SQuAD 2.0 dataset. Applied Knowledge Distillation (KD) and fine-tuned DistilBERT (student) using BERT as the teacher m… ☆25 · Updated 4 years ago
- The implementation of the papers on dual learning of natural language understanding and generation. (ACL 2019, 2020; Findings of EMNLP 2020… ☆67 · Updated 4 years ago
- https://arxiv.org/pdf/1909.04054 ☆79 · Updated 2 years ago
- ☆76 · Updated 2 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆53 · Updated 3 years ago
- Using BERT for conditional natural language generation by fine-tuning pre-trained BERT on a custom dataset ☆41 · Updated 5 years ago
- A pointer-generator with a BERT encoder ☆10 · Updated 6 years ago
- Zero-shot Transfer Learning from English to Arabic ☆30 · Updated 3 years ago
- Code for cross-sentence grammatical error correction using multilayer convolutional seq2seq models (ACL 2019) ☆50 · Updated 5 years ago
- Self-supervised NER prototype, updated version (69 entity types, 17 broad entity groups). Uses pretrained BERT models with no fine tuni… ☆78 · Updated 3 years ago
- Named Entity Recognition with Pretrained XLM-RoBERTa ☆91 · Updated 4 years ago
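One entry in the list above applies knowledge distillation (BERT teacher, DistilBERT student). The standard distillation objective blends hard-label cross-entropy with a KL term against the teacher's temperature-softened distribution; a dependency-free sketch (the logits, temperature, and `alpha` below are illustrative choices, not taken from that repository):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 flattens the distribution.
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """alpha weights the hard-label cross-entropy; (1 - alpha) weights the
    KL divergence from the teacher's softened distribution, scaled by T^2
    to keep gradient magnitudes comparable across temperatures."""
    hard = -math.log(softmax(student_logits)[label])
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    return alpha * hard + (1 - alpha) * (T * T) * soft

loss = distillation_loss([0.2, 2.1, 0.4], [0.1, 2.8, 0.3], label=1)
```

When student and teacher logits coincide, the KL term vanishes and the loss reduces to `alpha` times the ordinary cross-entropy, which is a quick sanity check on an implementation.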