szhangtju / The-compression-of-Transformer
☆64 · Updated 4 years ago
Alternatives and similar repositories for The-compression-of-Transformer:
Users who are interested in The-compression-of-Transformer are comparing it to the repositories listed below.
- Compression of NMT transformer model with tensor methods ☆48 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- ☆33 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- PyTorch library for factorized L0-based pruning ☆44 · Updated last year
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆171 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- ☆27 · Updated 5 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated last year
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 5 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆152 · Updated last year
- Code for the paper "Continual and Multi-Task Architecture Search" (ACL 2019) ☆41 · Updated 5 years ago
- ☆60 · Updated 4 years ago
- An implementation of various tensor-based decompositions for NN & RNN parameters ☆18 · Updated 6 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- This package implements THOR: Transformer with Stochastic Experts ☆62 · Updated 3 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Implementation of "Variational Information Bottleneck for Effective Low-resource Fine-tuning" (ICLR 2021) ☆39 · Updated 3 years ago
- Reproduces the results of the ICLR 2018 paper "Compressing Word Embeddings via Deep Compositional Code Learning" ☆23 · Updated 6 years ago
- Code for "Encoding Word Order in Complex-valued Embedding" ☆42 · Updated 5 years ago
- Implementation of the paper "Towards A Deep and Unified Understanding of Deep Neural Models in NLP" ☆72 · Updated 5 years ago
- ☆22 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- Implements Reformer: The Efficient Transformer in PyTorch ☆85 · Updated 5 years ago
- PyTorch implementation of our NAACL 2019 paper "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling" http… ☆62 · Updated 4 years ago
- Code for "Towards Binary-Valued Gates for Robust LSTM Training" ☆76 · Updated 6 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 4 years ago