ymcui / LAMB_Optimizer_TF
LAMB Optimizer for Large Batch Training (TensorFlow version)
☆120 · Updated 5 years ago
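For context on what the listed repositories implement: LAMB applies an Adam-style update per layer, then rescales each layer's step by the ratio of the weight norm to the update norm (the "trust ratio"), which is what makes very large batch sizes trainable. A minimal NumPy sketch of one update step, assuming standard hyperparameter defaults from the LAMB paper (the function name and signature here are illustrative, not from the repository):

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single parameter tensor (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * g           # first moment, as in Adam
    v = beta2 * v + (1 - beta2) * g * g       # second moment, as in Adam
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Adam direction plus decoupled weight decay
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    # layer-wise trust ratio: scale the step by ||w|| / ||update||
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = w - lr * trust * update
    return w, m, v
```

In a real training loop this is applied independently to every parameter tensor, so each layer gets its own trust ratio; production implementations (such as the repositories below) also handle exclusions like bias and LayerNorm parameters.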
Alternatives and similar repositories for LAMB_Optimizer_TF
Users that are interested in LAMB_Optimizer_TF are comparing it to the libraries listed below
- TensorFlow code and pre-trained models for BERT ☆114 · Updated 5 years ago
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 5 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network" ☆78 · Updated 5 years ago
- Fine-tune large BERT models with multi-GPU and FP16 support. ☆192 · Updated 5 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆269 · Updated 3 years ago
- A simple yet complete implementation of the popular BERT model ☆127 · Updated 5 years ago
- Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling ☆146 · Updated 5 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆201 · Updated 5 years ago
- ☆24 · Updated 5 years ago
- ☆93 · Updated 3 years ago
- Bi-Directional Block Self-Attention ☆122 · Updated 7 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆313 · Updated 2 years ago
- Reproducing Densely Interactive Inference Network in Keras ☆75 · Updated 7 years ago
- Multi-GPU pre-training for BERT on a single machine without Horovod (data parallelism) ☆172 · Updated 3 months ago
- Implementation of the LAMB optimizer for Keras from the paper "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes" ☆75 · Updated 6 years ago
- Experimental results for LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax in TensorFlow ☆100 · Updated 6 years ago
- Efficient Transformers for research, in PyTorch and TensorFlow, using Locality-Sensitive Hashing ☆95 · Updated 5 years ago
- High-performance lightweight model evaluation: Shared Tasks in NLPCC 2020, Task 1, Light Pre-Training Chinese Language Model for NLP Task ☆58 · Updated 5 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆133 · Updated 4 years ago
- Code for Synchronous Bidirectional Neural Machine Translation (SB-NMT) ☆66 · Updated 6 years ago
- Global-Locally Self-Attentive Dialogue State Tracker ☆185 · Updated 3 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆226 · Updated 4 years ago
- MAsked Sequence to Sequence (MASS) pre-training for language generation ☆21 · Updated 6 years ago
- ☆395 · Updated 6 years ago
- Transformer-XL with checkpoint loader ☆68 · Updated 3 years ago
- ☆74 · Updated 8 years ago
- Simple TensorFlow implementation of "A Structured Self-Attentive Sentence Embedding" (ICLR 2017) ☆91 · Updated 7 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- Code for the RecAdam paper: "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆117 · Updated 4 years ago