khakhulin / compressed-transformer
Compression of NMT transformer model with tensor methods
☆48 · Updated 6 years ago
Alternatives and similar repositories for compressed-transformer
Users interested in compressed-transformer are comparing it to the libraries listed below
- Code for the paper "Are Sixteen Heads Really Better than One?"☆172Updated 5 years ago
- ☆64Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking"☆113Updated 6 years ago
- ☆60Updated 5 years ago
- Code for paper "SWALP: Stochastic Weight Averaging forLow-Precision Training".☆62Updated 6 years ago
- LAMB Optimizer for Large Batch Training (TensorFlow version)☆120Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention☆133Updated last year
- Adaptive Softmax implementation for PyTorch☆81Updated 6 years ago
- A smoother activation function (undergrad code)☆112Updated 5 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization"☆89Updated 4 years ago
- ☆27Updated 5 years ago
- Implementation of Universal Transformer in Pytorch☆261Updated 6 years ago
- Block Sparse movement pruning☆81Updated 4 years ago
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset☆123Updated 6 years ago
- CUDA kernels for generalized matrix-multiplication in PyTorch☆85Updated 3 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845☆120Updated 4 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate☆151Updated 2 years ago
- Codes for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View"☆148Updated 6 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth"☆61Updated last year
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya…☆141Updated 3 years ago
- A simple module consistently outperforms self-attention and Transformer model on main NMT datasets with SoTA performance.☆85Updated 2 years ago
- ☆14Updated 6 years ago
- Pytorch library for factorized L0-based pruning.☆45Updated last year
- Implementation of https://arxiv.org/abs/1904.00962☆376Updated 4 years ago
- Training Transformer-XL on 128 GPUs☆140Updated 5 years ago
- pytorch implement of Lookahead Optimizer☆193Updated 3 years ago
- Block-sparse primitives for PyTorch☆160Updated 4 years ago
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization☆182Updated 3 years ago
- ☆144Updated 2 years ago
- meProp: Sparsified Back Propagation for Accelerated Deep Learning (ICML 2017)☆110Updated 3 years ago
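For context on the ReZero entry above: ReZero replaces the standard residual connection x + F(x) with x + α·F(x), where α is a learnable scalar initialized to zero, so every layer starts as the identity and deep Transformers can be trained without LayerNorm or warmup. A minimal PyTorch sketch of this idea follows; the module and variable names are illustrative and are not taken from the linked repository.

```python
import torch
import torch.nn as nn


class ReZeroBlock(nn.Module):
    """Residual block with a ReZero gate: output = x + alpha * sublayer(x).

    `sublayer` is any shape-preserving module (e.g. self-attention or an MLP);
    alpha is a learnable scalar initialized to zero, so the block starts as the
    identity mapping and gradually "turns on" during training.
    """

    def __init__(self, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.alpha = nn.Parameter(torch.zeros(1))  # the ReZero residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.alpha * self.sublayer(x)


# Example: a small feed-forward sublayer wrapped in a ReZero residual.
block = ReZeroBlock(nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64)))
out = block(torch.randn(8, 64))  # same shape as the input; identity at initialization
```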