pren1 / A_Pipeline_Of_Pretraining_Bert_On_Google_TPU
A tutorial on pretraining BERT on your own dataset using a Google TPU
☆44Updated 5 years ago
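The tutorial above covers pretraining BERT on a custom dataset with a Google TPU. The heart of any BERT pretraining pipeline is the masked-language-model (MLM) data step; below is a minimal sketch of BERT's original masking recipe (15% of tokens selected; of those, 80% replaced with `[MASK]`, 10% with a random token, 10% left unchanged). The function name and structure are illustrative, not taken from the repo:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", vocab=None, rng=None):
    """BERT-style MLM masking: pick ~15% of positions as prediction targets;
    of those, 80% become [MASK], 10% a random vocab token, 10% unchanged."""
    rng = rng or random.Random()
    vocab = vocab or tokens  # fallback vocab for the random-replacement branch
    out, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                out[i] = mask_token
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: leave the token unchanged (10% of targets)
    return out, labels

masked, labels = mask_tokens(
    ["the", "cat", "sat", "on", "the", "mat"], rng=random.Random(42)
)
print(masked)  # → ['the', '[MASK]', 'sat', 'on', 'the', 'mat']
print(labels)  # → [None, 'cat', None, None, None, None]
```

In the full pipeline these (input, label) pairs are serialized to TFRecords and fed to the TPU trainer; only positions with a non-`None` label contribute to the MLM loss.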
Alternatives and similar repositories for A_Pipeline_Of_Pretraining_Bert_On_Google_TPU
Users interested in A_Pipeline_Of_Pretraining_Bert_On_Google_TPU are comparing it to the libraries listed below.
- CharBERT: Character-aware Pre-trained Language Model (COLING2020)☆121Updated 5 years ago
- Self-supervised NER prototype - updated version (69 entity types - 17 broad entity groups). Uses pretrained BERT models with no fine tuni…☆78Updated 3 years ago
- BERT, which stands for Bidirectional Encoder Representations from Transformers, is the SOTA in transfer learning for NLP.☆56Updated 5 years ago
- Code for the paper "Efficient Adaptation of Pretrained Transformers for Abstractive Summarization"☆71Updated 6 years ago
- reference pytorch code for named entity tagging☆87Updated last year
- Code for cross-sentence grammatical error correction using multilayer convolutional seq2seq models (ACL 2019)☆50Updated 5 years ago
- Minimal Interactive Attention Visualization☆140Updated 5 years ago
- architectures and pre-trained models for long document classification.☆154Updated 5 years ago
- This repository contains various ways to calculate sentence vector similarity using NLP models☆198Updated 5 years ago
- Fine-tune transformers with pytorch-lightning☆44Updated 3 years ago
- This repository contains the corpora and supplementary data, along with instructions for recreating the experiments, for our paper: "End-…☆90Updated 5 years ago
- ☆40Updated 4 years ago
- reference pytorch code for intent classification☆44Updated last year
- Massively Multilingual Transfer for NER☆86Updated 4 years ago
- SUPERT: Unsupervised multi-document summarization evaluation & generation☆96Updated 3 years ago
- Zero-shot Transfer Learning from English to Arabic☆30Updated 3 years ago
- ☆65Updated 5 years ago
- Named Entity Recognition with Pretrained XLM-RoBERTa☆92Updated 4 years ago
- Fine-tuned Transformers compatible BERT models for Sequence Tagging☆40Updated 5 years ago
- We summarize the summarization papers presented at major conferences (starting with ACL 2019)☆84Updated 6 years ago
- Team Kakao&Brain's Grammatical Error Correction System for the ACL 2019 BEA Shared Task☆92Updated 6 years ago
- Format converter from one QA-task dataset type to another☆39Updated 7 years ago
- Use BERT to Fill in the Blanks☆84Updated 4 years ago
- ACL 2019: Zero-shot Word Sense Disambiguation using Sense Definition Embedding☆75Updated last year
- Python code for various NLP metrics☆169Updated 6 years ago
- 1. Pretrain ALBERT on a custom corpus 2. Fine-tune the pretrained ALBERT model on a downstream task☆33Updated 5 years ago
- Official code and data repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies" (https://arxiv.org/abs/1906.02622)…☆92Updated last year
- ☆33Updated 7 years ago
- A collection of resources on using BERT (https://arxiv.org/abs/1810.04805 ) and related Language Models in production environments.☆96Updated 4 years ago
- Implementation of paper "Learning to Encode Text as Human-Readable Summaries using GAN"
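Several of the listed repos (notably the sentence-vector-similarity one) compare text embeddings by cosine similarity. A minimal, library-free sketch of that comparison (the vectors here are toy stand-ins for real sentence embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0 (orthogonal)
```

With real sentence embeddings (e.g. from a BERT-family encoder), scores near 1.0 indicate semantically similar sentences.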