snapthat / TF-T5-text-to-text
This repository demonstrates training T5 transformers using TensorFlow 2.
☆13 · Updated 4 years ago
Alternatives and similar repositories for TF-T5-text-to-text:
Users interested in TF-T5-text-to-text are comparing it to the libraries listed below.
- QED: A Framework and Dataset for Explanations in Question Answering ☆116 · Updated 3 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 · Updated 4 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ☆57 · Updated last year
- Tutorial for first-time BERT users ☆103 · Updated 2 years ago
- LongSumm - Scientific Document Summarization Task ☆74 · Updated 2 years ago
- This repository contains the code for "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Representations". ☆63 · Updated 4 years ago
- Dynamic ensemble decoding with transformer-based models ☆29 · Updated last year
- Zero-shot Transfer Learning from English to Arabic ☆29 · Updated 2 years ago
- Training T5 to perform numerical reasoning. ☆24 · Updated 3 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆90 · Updated last month
- On Generating Extended Summaries of Long Documents ☆78 · Updated 4 years ago
- Corresponding code repo for the paper at COLING 2020 - ARGMIN 2020: "DebateSum: A large-scale argument mining and summarization dataset" ☆54 · Updated 3 years ago
- ☆66 · Updated 4 years ago
- The NewSHead dataset is a multi-doc headline dataset used in NHNet for training a headline summarization model. ☆37 · Updated 3 years ago
- Introduction to the recently released T5 model from the paper - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Tra… ☆35 · Updated 4 years ago
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- SUPERT: Unsupervised multi-document summarization evaluation & generation ☆94 · Updated 2 years ago
- Python-based implementation of the Translate-Align-Retrieve method to automatically translate the SQuAD Dataset to Spanish. ☆59 · Updated 2 years ago
- Training a model without a dataset for natural language inference (NLI) ☆25 · Updated 4 years ago
- Reading comprehension with the ALBERT transformer model ☆15 · Updated 3 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 4 years ago
- Use cases of Hugging Face's BERT (e.g. paraphrase generation, unsupervised extractive summarization). ☆20 · Updated 5 years ago
- Source code for our AAAI 2020 paper "P-SIF: Document Embeddings using Partition Averaging" ☆34 · Updated 4 years ago
- Paraphrase any question with T5 (Text-To-Text Transfer Transformer) - pretrained model and training script provided ☆187 · Updated last year
- A tutorial on pretraining BERT on your own dataset using a Google TPU ☆44 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Implementation, trained models and result data for the paper "Aspect-based Document Similarity for Research Papers" #COLING2020 ☆62 · Updated 11 months ago
- shabeelkandi / Handling-Out-of-Vocabulary-Words-in-Natural-Language-Processing-using-Language-Modelling ☆69 · Updated 5 years ago
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- Code to reproduce the experiments from the paper. ☆102 · Updated last year