yaohungt / TransformerDissection
[EMNLP'19] Summary for Transformer Understanding
☆53 · Updated 6 years ago
Alternatives and similar repositories for TransformerDissection
Users interested in TransformerDissection are comparing it to the repositories listed below.
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering". ☆50 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 3 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 6 years ago
- PyTorch implementation for our NAACL 2019 paper "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling" http… ☆63 · Updated 5 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 5 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆153 · Updated 2 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆147 · Updated 6 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 5 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆111 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- Axial Positional Embedding for PyTorch ☆84 · Updated 10 months ago
- Code for the paper "PermuteFormer" ☆42 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 5 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- Implementation of COCO-LM, "Correcting and Contrasting Text Sequences for Language Model Pretraining", in PyTorch ☆46 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance. ☆86 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- Code for Explicit Sparse Transformer ☆61 · Updated 2 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 5 years ago
- Hyperbolic Neural Networks, PyTorch ☆87 · Updated 6 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆65 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆65 · Updated 2 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- ☆64 · Updated 5 years ago
- Code to reproduce the results for Compositional Attention ☆59 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 4 years ago
- Standalone Product Key Memory module in PyTorch, for augmenting Transformer models ☆87 · Updated last month
- Code for reversible recurrent neural networks ☆40 · Updated 6 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago