AristotelisPap / Question-Answering-with-BERT-and-Knowledge-Distillation
Fine-tuned BERT on the SQuAD 2.0 dataset. Applied knowledge distillation (KD) to fine-tune DistilBERT (student) with BERT as the teacher model, reducing the size of the original BERT by 40%.
☆26 · Updated 4 years ago
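The distillation setup described above (DistilBERT student, BERT teacher) is usually trained with the standard Hinton-style KD objective: a hard-label cross-entropy term blended with a temperature-scaled KL divergence between teacher and student distributions. A minimal dependency-free sketch of that loss is below; the function names and the default `T`/`alpha` values are illustrative assumptions, not taken from this repository.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Hinton-style KD loss (illustrative sketch):
    alpha * CE(student, hard label)
    + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL divergence between softened teacher and student distributions.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    # Ordinary cross-entropy against the gold label (T = 1).
    ce = -math.log(softmax(student_logits)[hard_label])
    # T^2 rescales the soft term so gradients stay comparable across temperatures.
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When student and teacher logits agree, the KL term vanishes and only the hard-label cross-entropy remains; raising `T` softens both distributions so the student also learns from the teacher's relative probabilities over wrong answers.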
Alternatives and similar repositories for Question-Answering-with-BERT-and-Knowledge-Distillation
Users interested in Question-Answering-with-BERT-and-Knowledge-Distillation are also comparing it to the repositories listed below.
- A repository for our AAAI-2020 Cross-lingual-NER paper. Code will be updated shortly. ☆47 · Updated 3 years ago
- ☆42 · Updated 5 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆136 · Updated 2 years ago
- Implementation of Self-adjusting Dice Loss from "Dice Loss for Data-imbalanced NLP Tasks" paper ☆109 · Updated 5 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- ☆47 · Updated last week
- Source code of Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction ☆43 · Updated 4 years ago
- Lexical Simplification with Pretrained Encoders ☆70 · Updated 4 years ago
- Named Entity Recognition with Small Strongly Labeled and Large Weakly Labeled Data ☆100 · Updated 2 years ago
- The jiant toolkit for general-purpose text understanding models ☆22 · Updated 5 years ago
- A simple implementation of how to leverage a language model for a prompt-based learning model ☆45 · Updated 3 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with pyTorch … ☆82 · Updated 3 years ago
- ☆67 · Updated 4 years ago
- ☆42 · Updated 4 years ago
- Code and models used in "MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases" ☆100 · Updated 2 years ago
- Official implementation of "DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization" ☆143 · Updated 3 years ago
- Improved version of GECToR ☆61 · Updated 2 years ago
- BERTserini ☆26 · Updated 3 years ago
- Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer (ACL 2021) ☆30 · Updated 3 years ago
- DialogSum: A Real-life Scenario Dialogue Summarization Dataset (Findings of ACL 2021) ☆184 · Updated last year
- Language-agnostic BERT Sentence Embedding (LaBSE) ☆153 · Updated 5 years ago
- Self-supervised NER prototype, updated version (69 entity types, 17 broad entity groups). Uses pretrained BERT models with no fine tuni… ☆78 · Updated 3 years ago
- Named Entity Recognition with Pretrained XLM-RoBERTa ☆92 · Updated 4 years ago
- Code and pre-trained models of the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 3 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆53 · Updated 3 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆153 · Updated 3 years ago
- The official code of the "Frustratingly Easy System Combination for Grammatical Error Correction" paper ☆57 · Updated last year
- [EMNLP 2021] Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification ☆131 · Updated 2 years ago
- [ACL 2020] Structure-Level Knowledge Distillation for Multilingual Sequence Labeling ☆72 · Updated 3 years ago
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models ☆62 · Updated 3 years ago