ymcui / LAMB_Optimizer_TF
LAMB Optimizer for Large Batch Training (TensorFlow version)
☆120 · Updated 5 years ago
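Since this listing centers on an implementation of the LAMB optimizer, a minimal sketch of the layer-wise update rule it is named for may help in comparing the alternatives below. The sketch follows the rule described in You et al., "Large Batch Optimization for Deep Learning" (2019); the function name `lamb_step`, its default hyperparameters, and the NumPy framing are illustrative assumptions, not this repository's TensorFlow API.

```python
# Minimal NumPy sketch of one LAMB update step (assumption: names and
# defaults are illustrative, not taken from LAMB_Optimizer_TF itself).
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """Apply one LAMB update to parameter tensor w given gradient g.

    m, v are running first/second moment estimates; t is the 1-indexed
    step count used for bias correction (implementations vary on whether
    bias correction is applied).
    """
    m = beta1 * m + (1 - beta1) * g        # Adam-style first moment
    v = beta2 * v + (1 - beta2) * g * g    # Adam-style second moment
    m_hat = m / (1 - beta1 ** t)           # bias correction
    v_hat = v / (1 - beta2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    # Layer-wise trust ratio: rescale the step by ||w|| / ||update||,
    # falling back to 1 when either norm is zero.
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = w - lr * trust_ratio * update
    return w, m, v
```

The distinguishing step is the layer-wise trust ratio, which rescales each layer's update relative to the magnitude of its weights so that very large batch sizes do not destabilize training.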
Alternatives and similar repositories for LAMB_Optimizer_TF:
Users interested in LAMB_Optimizer_TF are comparing it to the libraries listed below.
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 5 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- Experimental results of LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax in TensorFlow ☆100 · Updated 6 years ago
- TensorFlow code and pre-trained models for BERT ☆114 · Updated 5 years ago
- Feel free to fine-tune large BERT models with multi-GPU and FP16 support ☆192 · Updated 5 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network" ☆78 · Updated 5 years ago
- Bi-Directional Block Self-Attention ☆123 · Updated 6 years ago
- PyTorch implementation of Transformer-based Neural Machine Translation ☆77 · Updated 2 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆268 · Updated 3 years ago
- Reproducing the Densely Interactive Inference Network in Keras ☆74 · Updated 7 years ago
- ☆93 · Updated 3 years ago
- ⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆309 · Updated last year
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆201 · Updated 5 years ago
- Sampled Softmax implementation for PyTorch ☆43 · Updated 7 years ago
- A PyTorch implementation of "Attention Is All You Need" ☆42 · Updated 6 years ago
- Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling ☆146 · Updated 5 years ago
- Knowledge Distillation for Transformer Language Models ☆52 · Updated last year
- ☆74 · Updated 7 years ago
- Efficient Transformers for research in PyTorch and TensorFlow, using Locality-Sensitive Hashing ☆94 · Updated 5 years ago
- A simple yet complete implementation of the popular BERT model ☆127 · Updated 4 years ago
- ☆395 · Updated 6 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance ☆86 · Updated last year
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated last year
- Code for the NIPS 2018 paper "Frequency-Agnostic Word Representation" ☆115 · Updated 5 years ago
- ☆24 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Updated 3 years ago
- High-performance small model evaluation. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task ☆57 · Updated 4 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- Code for "Adversarial Training Methods for Semi-Supervised Text Classification" ☆123 · Updated 6 years ago
- Simple TensorFlow implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017) ☆91 · Updated 6 years ago