szhangtju / The-compression-of-Transformer
☆64 · Updated 4 years ago
Alternatives and similar repositories for The-compression-of-Transformer
Users interested in The-compression-of-Transformer are comparing it to the libraries listed below.
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- Compression of NMT transformer model with tensor methods (a low-rank factorization sketch appears after this list) ☆48 · Updated 5 years ago
- ☆27 · Updated 5 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- An implementation of various tensor-based decompositions for NN & RNN parameters ☆18 · Updated 7 years ago
- ☆60 · Updated 4 years ago
- PyTorch library for factorized L0-based pruning ☆45 · Updated last year
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆140 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 4 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Sequence-Level Mixed Sample Data Augmentation ☆21 · Updated 4 years ago
- Reproduces the results of "Compressing Word Embeddings via Deep Compositional Code Learning" (ICLR 2018) ☆23 · Updated 7 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆61 · Updated 3 years ago
- Code for the paper "Continual and Multi-Task Architecture Search" (ACL 2019) ☆41 · Updated 5 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- Implementation of "Variational Information Bottleneck for Effective Low-resource Fine-tuning" (ICLR 2021) ☆40 · Updated 4 years ago
- EMNLP 2018: Multi-Head Attention with Disagreement Regularization; NAACL 2019: Information Aggregation for Multi-Head Attention with Rout… ☆21 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts ☆63 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆85 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆195 · Updated 2 years ago
- Implementation of Sparsemax activation in PyTorch (a minimal sparsemax sketch appears after this list) ☆160 · Updated 5 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020)
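
Several of the repositories above compress Transformer weights with tensor or low-rank methods. As a rough orientation only, and not code from any listed repo, here is a minimal PyTorch sketch of the simplest such idea: replacing a linear layer's weight with its truncated-SVD factorization. The function name `factorize_linear` and the choice of `rank` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) Linear layer with two thinner layers whose
    product is the best rank-`rank` approximation of the original weight
    (truncated SVD). Parameters drop from out*in to rank*(out + in)."""
    W = layer.weight.data                            # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                     # fold singular values into U
    V_r = Vh[:rank, :]
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

# usage (hypothetical sizes): compress a 512x512 projection to rank 64
layer = nn.Linear(512, 512)
compressed = factorize_linear(layer, rank=64)
```

The repos listed above go further (Tucker/tensor-train decompositions, fine-tuning after factorization), but the parameter-count trade-off is the same in spirit.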
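Likewise, for the Sparsemax entry: sparsemax (Martins & Astudillo, 2016) is the Euclidean projection of the logits onto the probability simplex, which, unlike softmax, can assign exactly zero probability to low-scoring items. A minimal self-contained sketch, not the linked library's API:

```python
import torch

def sparsemax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Project logits onto the probability simplex; output sums to 1
    along `dim` and is exactly sparse."""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    n = z.size(dim)
    view = [1] * z.dim()
    view[dim] = n
    # k = 1..n, shaped to broadcast along `dim`
    k = torch.arange(1, n + 1, device=z.device, dtype=z.dtype).view(view)
    z_cumsum = z_sorted.cumsum(dim)
    # support set: positions where 1 + k * z_(k) > sum_{j<=k} z_(j)
    support = (1 + k * z_sorted) > z_cumsum
    k_z = support.sum(dim=dim, keepdim=True)         # support size, always >= 1
    # threshold tau chosen so the clipped output sums to one
    tau = (z_cumsum.gather(dim, k_z - 1) - 1) / k_z.to(z.dtype)
    return torch.clamp(z - tau, min=0.0)

# e.g. sparse attention weights over 5 scores
scores = torch.tensor([2.0, 1.0, 0.1, -1.0, -2.0])
p = sparsemax(scores)   # sums to 1, with exact zeros in the tail
```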