lancopku / AdaNorm
Code for "Understanding and Improving Layer Normalization"
☆46 · Updated 5 years ago
Alternatives and similar repositories for AdaNorm:
Users interested in AdaNorm are comparing it to the repositories listed below.
- ☆22 · Updated 3 years ago
- Variational Transformers for Diverse Response Generation ☆80 · Updated 5 months ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆29 · Updated 4 years ago
- TensorFlow implementation of "Theory and Experiments on Vector Quantized Autoencoders" ☆14 · Updated 5 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- DisCo Transformer for Non-autoregressive MT ☆78 · Updated 2 years ago
- ICLR 2019: Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- EMNLP 2018: Multi-Head Attention with Disagreement Regularization; NAACL 2019: Information Aggregation for Multi-Head Attention with Rout… ☆19 · Updated 4 years ago
- PyTorch implementation of "Pay Attention to MLPs" ☆40 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆72 · Updated 2 years ago
- ☆13 · Updated 5 years ago
- A single-model, multi-scale VAE based on the Transformer ☆55 · Updated 3 years ago
- This repo provides the code for the ACL 2020 paper "Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEnco… ☆53 · Updated 4 years ago
- Papers and materials related to curriculum learning ☆54 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆88 · Updated 3 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- ☆63 · Updated 4 years ago
- Test implementation of "Aligned Cross Entropy for Non-Autoregressive Machine Translation" (https://arxiv.org/abs/2004.01655) ☆21 · Updated 5 months ago
- [EMNLP'19] Summary of Transformer understanding ☆53 · Updated 5 years ago
- Implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 4 years ago
- ☆53 · Updated 3 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆86 · Updated last year
- PyTorch implementation of "Imputer: Sequence Modelling via Imputation and Dynamic Programming" ☆58 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆57 · Updated last year
- Code for the ICML 2020 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 3 years ago
- Implementation of "Variational Information Bottleneck for Effective Low-Resource Fine-Tuning" (ICLR 2021) ☆38 · Updated 3 years ago
- ☆83 · Updated 5 years ago
- ☆120 · Updated 5 years ago
- ☆36 · Updated 4 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 4 years ago