ymcui / LAMB_Optimizer_TF
LAMB Optimizer for Large Batch Training (TensorFlow version)
☆120 · Updated 4 years ago
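For reference, LAMB (from "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes", the paper behind this repo) rescales an Adam-style step for each layer by a trust ratio ||w|| / ||update||, which is what keeps very large-batch training stable. Below is a minimal NumPy sketch of a single LAMB step; the function name and hyperparameter defaults are illustrative rather than taken from this repository, and some LAMB variants omit the bias correction shown here.

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single layer's weights w with gradient g.

    m and v are the Adam first/second moment accumulators; t is the
    1-based step count. Returns the updated (w, m, v).
    """
    # Adam-style moment estimates with bias correction.
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    # Adam direction plus decoupled weight decay.
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    # Layer-wise trust ratio: scale the step by ||w|| / ||update||.
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust_ratio * update, m, v
```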
Related projects
Alternatives and complementary repositories for LAMB_Optimizer_TF
- Source code for "Accelerating Neural Transformer via an Average Attention Network" ☆78 · Updated 5 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 5 years ago
- TensorFlow code and pre-trained models for BERT ☆114 · Updated 4 years ago
- PyTorch Implementation of "Non-Autoregressive Neural Machine Translation" ☆269 · Updated 2 years ago
- Experimental results of LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax on TensorFlow. ☆101 · Updated 6 years ago
- Fine-tune large BERT models with multi-GPU and FP16 support. ☆192 · Updated 4 years ago
- Simple TensorFlow implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017) ☆92 · Updated 6 years ago
- Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling ☆146 · Updated 4 years ago
- Efficient Transformers for research, PyTorch and TensorFlow, using Locality Sensitive Hashing ☆93 · Updated 4 years ago
- A PyTorch implementation of "Attention Is All You Need" ☆42 · Updated 6 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆169 · Updated 4 years ago
- PyTorch implementation of Transformer-based Neural Machine Translation ☆77 · Updated last year
- Sampled Softmax implementation for PyTorch ☆43 · Updated 6 years ago
- Multi-GPU pre-training of BERT from scratch on one machine, without Horovod (data parallelism) ☆173 · Updated last month
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆125 · Updated 3 years ago
- Implementation of the LAMB optimizer for Keras, from the paper "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes" ☆76 · Updated 5 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆199 · Updated 5 years ago
- ☆75 · Updated 7 years ago
- Bi-Directional Block Self-Attention ☆124 · Updated 6 years ago
- Improving the Transformer translation model with document-level context ☆172 · Updated 4 years ago
- ☆94 · Updated 3 years ago
- Code for the RecAdam paper: "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆115 · Updated 4 years ago
- Evaluation of high-performance small models. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task ☆57 · Updated 4 years ago
- Fork of huggingface/pytorch-pretrained-BERT for BERT on STILTs ☆106 · Updated 2 years ago
- Code for Synchronous Bidirectional Neural Machine Translation (SB-NMT) ☆66 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆87 · Updated last year
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆225 · Updated 3 years ago