itsuncheng / fine-tuning-GPT2
Codebase for the Medium Article on Fine-tuning GPT2 for Text Generation
☆69 · Updated 4 years ago
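The repository's topic, fine-tuning GPT-2 for text generation, boils down to continuing to train the model on your own text with its next-token (causal language modeling) objective. Below is a minimal sketch of that training step using Hugging Face `transformers`; a tiny randomly initialized GPT-2 config is used here so the snippet runs offline, whereas the article fine-tunes the pretrained `gpt2` checkpoint on real tokenized text. Sizes and hyperparameters are illustrative, not taken from the repository.

```python
# Minimal sketch of the GPT-2 fine-tuning objective: next-token
# prediction (causal LM) with cross-entropy loss.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Toy config so the example runs offline; the article uses the
# pretrained "gpt2" checkpoint via GPT2LMHeadModel.from_pretrained.
config = GPT2Config(vocab_size=100, n_positions=64,
                    n_embd=64, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Stand-in batch of token ids; with labels=input_ids the model shifts
# the targets internally and returns the next-token cross-entropy loss.
input_ids = torch.randint(0, config.vocab_size, (2, 16))
out = model(input_ids=input_ids, labels=input_ids)

out.loss.backward()   # one gradient step of the fine-tuning loop
optimizer.step()
optimizer.zero_grad()
```

In practice this step is wrapped in a loop over a tokenized dataset (or handed to `transformers.Trainer`), and generation afterwards uses `model.generate`.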
Alternatives and similar repositories for fine-tuning-GPT2:
Users interested in fine-tuning-GPT2 are comparing it to the repositories listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆80 · Updated 2 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ☆116 · Updated 3 years ago
- Coreference resolution with different higher-order inference methods; implemented in PyTorch. ☆36 · Updated last year
- SUPERT: Unsupervised multi-document summarization evaluation & generation ☆94 · Updated 2 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 4 years ago
- Fine-tuning GPT-2 Small for Question Answering ☆129 · Updated 2 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆52 · Updated 2 years ago
- ☆77 · Updated 10 months ago
- This is the official code for Extractive Summarization of Long Documents by Combining Global and Local Context ☆69 · Updated 4 years ago
- Code for NAACL 2021 full paper "Efficient Attentions for Long Document Summarization" ☆66 · Updated 3 years ago
- Abstractive text summarization by fine-tuning seq2seq models. ☆37 · Updated 4 years ago
- Lexical Simplification with Pretrained Encoders ☆70 · Updated 4 years ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆157 · Updated 2 years ago
- ☆76 · Updated 2 years ago
- MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance ☆205 · Updated last year
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆132 · Updated last year
- An original implementation of EMNLP 2020, "AmbigQA: Answering Ambiguous Open-domain Questions" ☆118 · Updated 2 years ago
- Tutorial for first-time BERT users ☆103 · Updated 2 years ago
- Long-context pretrained encoder-decoder models ☆94 · Updated 2 years ago
- Resources for the "CTRLsum: Towards Generic Controllable Text Summarization" paper ☆146 · Updated last year
- Named Entity Recognition with Pretrained XLM-RoBERTa ☆88 · Updated 3 years ago
- EMNLP 2020 - Summarizing Text on Any Aspects ☆37 · Updated 4 years ago
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆34 · Updated 3 years ago
- Use BERT to Fill in the Blanks ☆82 · Updated 3 years ago
- The accompanying code for "Injecting Numerical Reasoning Skills into Language Models" (Mor Geva*, Ankit Gupta* and Jonathan Berant, ACL 2… ☆89 · Updated 7 months ago
- ☆45 · Updated last year
- Paraphrase any question with T5 (Text-To-Text Transfer Transformer) - Pretrained model and training script provided ☆187 · Updated last year
- Code for the CIKM 2019 Paper: How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations ☆31 · Updated last year
- The code repository for the paper "Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization". ☆24 · Updated 4 years ago
- ☆30 · Updated 4 years ago