AristotelisPap / Question-Answering-with-BERT-and-Knowledge-Distillation
Fine-tuned BERT on the SQuAD 2.0 dataset. Applied Knowledge Distillation (KD) and fine-tuned DistilBERT (student) using BERT as the teacher model, reducing the size of the original BERT by 40%.
☆25 · Updated 4 years ago
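The repository's training script is not reproduced here, but the distillation setup the description refers to (a DistilBERT student trained to match the temperature-softened start/end logits of a BERT teacher fine-tuned on SQuAD 2.0) can be sketched roughly as below. This is a minimal sketch assuming PyTorch and Hugging Face transformers; the checkpoint names, temperature `T`, mixing weight `alpha`, and the toy example are illustrative assumptions, not values taken from the repository.

```python
# Minimal knowledge-distillation sketch for span-based QA.
# Assumptions (not from the repository): PyTorch + Hugging Face transformers,
# base checkpoints as placeholders, illustrative T / alpha values.
import torch
import torch.nn.functional as F
from transformers import (
    AutoTokenizer,
    BertForQuestionAnswering,
    DistilBertForQuestionAnswering,
)

# In practice the teacher would be a BERT checkpoint already fine-tuned on
# SQuAD 2.0; the base model is used here only as a stand-in.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
teacher = BertForQuestionAnswering.from_pretrained("bert-base-uncased").eval()
student = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")

T, alpha = 2.0, 0.5  # distillation temperature and soft/hard loss mix (illustrative)

def distillation_loss(student_logits, teacher_logits, labels):
    """KL divergence on temperature-softened logits plus cross-entropy on gold positions."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One illustrative training step on a toy question/context pair.
inputs = tokenizer(
    "Who wrote Hamlet?", "Hamlet was written by Shakespeare.", return_tensors="pt"
)
start_positions = torch.tensor([10])  # toy gold answer token index ("shakespeare")
end_positions = torch.tensor([10])

with torch.no_grad():
    t_out = teacher(**inputs)
s_out = student(**inputs)

loss = (
    distillation_loss(s_out.start_logits, t_out.start_logits, start_positions)
    + distillation_loss(s_out.end_logits, t_out.end_logits, end_positions)
)
loss.backward()  # a real training loop would follow this with an optimizer step
```

The 40% size reduction cited above comes from DistilBERT's smaller architecture (6 transformer layers instead of 12); the distillation loss is what lets the smaller student recover most of the teacher's accuracy.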
Alternatives and similar repositories for Question-Answering-with-BERT-and-Knowledge-Distillation:
Users interested in Question-Answering-with-BERT-and-Knowledge-Distillation are comparing it to the repositories listed below.
- ☆42 · Updated 4 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆80 · Updated 2 years ago
- Source code of Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction ☆43 · Updated 3 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated last year
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆52 · Updated 2 years ago
- The source code of the DR-BERT model and baselines ☆37 · Updated 3 years ago
- Code for "A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies."☆28Updated 3 years ago
- [ACL 2020] Structure-Level Knowledge Distillation For Multilingual Sequence Labeling☆71Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper☆135Updated last year
- Abstractive summarization using Bert2Bert framework.☆31Updated 4 years ago
- reference pytorch code for named entity tagging☆85Updated 5 months ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models." ☆47 · Updated 2 years ago
- A simple implementation showing how to leverage a language model for prompt-based learning ☆44 · Updated 3 years ago
- ☆66 · Updated 3 years ago
- Source code for the paper "Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization" ☆91 · Updated last year
- Use BERT for question answering, fine-tuned on SQuAD 2.0 ☆15 · Updated 5 years ago
- A repository for our AAAI-2020 Cross-lingual-NER paper. Code will be updated shortly. ☆47 · Updated 2 years ago
- ☆92 · Updated 3 years ago
- This is the official code for "Extractive Summarization of Long Documents by Combining Global and Local Context" ☆69 · Updated 4 years ago
- The PyTorch implementation of ReCoSa (the Relevant Contexts with Self-attention) for dialogue generation using the multi-head attention an… ☆22 · Updated last year
- Question answering on multiparty dialogue ☆45 · Updated 4 years ago
- Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer (ACL 2021) ☆30 · Updated 2 years ago
- Prompt-learning methods using BERT4Keras (PET, EFL, and NSP-BERT), for both Chinese and English. ☆29 · Updated 2 years ago
- Reference PyTorch code for intent classification ☆44 · Updated 5 months ago
- Implementation of the paper "Learning to Encode Text as Human-Readable Summaries using GAN" ☆66 · Updated 5 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆120 · Updated 4 years ago
- Zero-shot dialogue state tracking (DST) ☆82 · Updated 3 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 2 years ago
- ☆27 · Updated 3 months ago
- Entity extraction using BERT + CRF for single-turn / multi-turn settings in dialogues ☆30 · Updated 3 years ago