khakhulin / compressed-transformer
Compression of an NMT transformer model with tensor methods
☆47 · Updated 6 years ago
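The repository's theme is shrinking transformer weight matrices with tensor decompositions. As a minimal sketch of the underlying idea (not the repo's actual code), here is a low-rank factorized linear layer in PyTorch; the class name, dimensions, and rank are illustrative assumptions, and full tensor-train layers generalize this to more than two factors:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer with a rank-r factorization W ≈ U @ V,
    cutting parameters from d_in*d_out to r*(d_in + d_out)."""
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, d_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        # (x @ V^T) @ U^T: two thin matmuls instead of one dense one
        return x @ self.V.t() @ self.U.t() + self.bias

# Hypothetical usage: stand-in for a feed-forward projection in a transformer block
layer = LowRankLinear(d_in=512, d_out=2048, rank=64)
x = torch.randn(8, 10, 512)  # (batch, seq, d_model)
print(layer(x).shape)        # torch.Size([8, 10, 2048])
```

At rank 64 this stores 64 * (512 + 2048) ≈ 164k parameters instead of 512 * 2048 ≈ 1.05M for the dense layer, which is the trade-off tensor-method compression exploits.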
Alternatives and similar repositories for compressed-transformer
Users interested in compressed-transformer are comparing it to the libraries listed below.
- ☆64 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- CUDA kernels for generalized matrix multiplication in PyTorch ☆85 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- ☆59 · Updated 5 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 6 years ago
- ☆27 · Updated 6 years ago
- Adaptive Softmax implementation for PyTorch ☆81 · Updated 6 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- Implementation of the Universal Transformer in PyTorch ☆264 · Updated 7 years ago
- Implementation of LAMB (https://arxiv.org/abs/1904.00962) ☆377 · Updated 4 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth"
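Several of the listed repositories (ReZero, PowerNorm, Transformers without Tears) modify the transformer residual or normalization path. As a rough illustration of the ReZero idea from the last entry, here is a minimal sketch, assuming a generic sublayer; the class name and dimensions are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn

class ReZeroBlock(nn.Module):
    """Residual block with a learned scalar gate initialized to zero,
    following the ReZero formulation x + alpha * F(x)."""
    def __init__(self, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.alpha = nn.Parameter(torch.zeros(1))  # block starts as the identity map

    def forward(self, x):
        return x + self.alpha * self.sublayer(x)

# Hypothetical usage with a feed-forward sublayer
ff = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
block = ReZeroBlock(ff)
x = torch.randn(8, 10, 512)
print(block(x).shape)  # torch.Size([8, 10, 512])
```

Because alpha starts at zero, every block is the identity at initialization, which is the property the ReZero paper credits for fast, stable convergence at large depth without LayerNorm or warmup.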