ymcui / LAMB_Optimizer_TF
LAMB Optimizer for Large Batch Training (TensorFlow version)
☆121 · Updated 5 years ago
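As background for readers browsing these alternatives: LAMB (layer-wise adaptive moments) extends Adam with a per-layer trust ratio so that very large batch sizes remain stable. A minimal NumPy sketch of one update step is below; the function name `lamb_step` and its defaults are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single weight tensor (illustrative sketch,
    not the API of ymcui/LAMB_Optimizer_TF)."""
    m = beta1 * m + (1 - beta1) * g          # Adam-style first moment
    v = beta2 * v + (1 - beta2) * g * g      # Adam-style second moment
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Adam direction plus decoupled weight decay
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    # Layer-wise trust ratio: rescale the step by ||w|| / ||update||
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust * update, m, v
```

The trust ratio is computed per weight tensor (per layer), which is what distinguishes LAMB from plain Adam with weight decay.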
Alternatives and similar repositories for LAMB_Optimizer_TF
Users interested in LAMB_Optimizer_TF are comparing it to the libraries listed below
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 6 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network" ☆78 · Updated 6 years ago
- Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling ☆147 · Updated 5 years ago
- Reproducing Densely Interactive Inference Network in Keras ☆75 · Updated 7 years ago
- ☆93 · Updated 4 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆271 · Updated 3 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- Simple TensorFlow implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017) ☆91 · Updated 7 years ago
- Knowledge Distillation for Transformer Language Models ☆52 · Updated last year
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- An implementation of Bidirectional Attention Flow ☆40 · Updated 8 years ago
- PyTorch implementation of Transformer-based Neural Machine Translation ☆78 · Updated 2 years ago
- A simple yet complete implementation of the popular BERT model ☆128 · Updated 5 years ago
- High-performance small-model evaluation: Shared Tasks in NLPCC 2020. Task 1 - Light Pre-Training Chinese Language Model for NLP Task ☆60 · Updated 5 years ago
- Transformer-XL with checkpoint loader ☆68 · Updated 3 years ago
- Experimental results of LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax in TensorFlow ☆100 · Updated 7 years ago
- Code for Synchronous Bidirectional Neural Machine Translation (SB-NMT) ☆66 · Updated 6 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- ☆74 · Updated 8 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆86 · Updated 2 years ago
- R-net in PyTorch, with ELMo ☆198 · Updated 5 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆203 · Updated 6 years ago
- This repo is not maintained; for the latest version, please visit https://github.com/ictnlp. A collection of Transformer guides, implementa… ☆44 · Updated 6 years ago
- Worth-reading papers and related resources on the attention mechanism, the Transformer, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- PyTorch implementation of "Attention-over-Attention Neural Networks for Reading Comprehension" ☆59 · Updated 8 years ago
- TensorFlow code and pre-trained models for BERT ☆116 · Updated 5 years ago
- Two-Layer Hierarchical Softmax implementation for PyTorch ☆70 · Updated 4 years ago
- Adds BERT to the encoder of https://github.com/memray/seq2seq-keyphrase-pytorch ☆80 · Updated 6 years ago
- Distilling BERT using natural language generation ☆38 · Updated 2 years ago