lancopku / Explicit-Sparse-Transformer
Code for the Explicit Sparse Transformer
☆59 · Updated last year
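For context, the Explicit Sparse Transformer (Zhao et al., 2019) sparsifies attention by keeping only the top-k attention scores for each query and masking the rest out before the softmax, so each query attends only to its most relevant positions. Below is a minimal PyTorch sketch of that top-k selection idea; the function name, tensor shapes, and default k are illustrative assumptions, not taken from this repository.

```python
import torch
import torch.nn.functional as F

def explicit_sparse_attention(q, k, v, topk: int = 8):
    """Scaled dot-product attention keeping only the top-k scores per query.

    q, k, v: (batch, heads, seq_len, head_dim) tensors.
    topk: number of key positions each query may attend to (illustrative default).
    """
    # Standard scaled dot-product scores: (batch, heads, L_q, L_k)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    k_eff = min(topk, scores.size(-1))
    # k-th largest score in each query row; everything below it is masked out
    kth = scores.topk(k_eff, dim=-1).values[..., -1:]
    sparse = scores.masked_fill(scores < kth, float("-inf"))
    # Softmax assigns exactly zero weight to the masked (-inf) positions
    return F.softmax(sparse, dim=-1) @ v

# Example: 2 sequences, 4 heads, length 16, head dim 32 (self-attention)
x = torch.randn(2, 4, 16, 32)
out = explicit_sparse_attention(x, x, x, topk=5)  # -> (2, 4, 16, 32)
```

Masking to -inf before the softmax zeroes out the discarded positions exactly, which is what makes the sparsity explicit rather than a post-hoc truncation of dense attention weights.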
Alternatives and similar repositories for Explicit-Sparse-Transformer:
Users who are interested in Explicit-Sparse-Transformer are comparing it to the repositories listed below
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Some examples of drawing illustration plots for papers using the seaborn package ☆14 · Updated 5 years ago
- ☆32 · Updated 3 years ago
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆89 · Updated 3 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 3 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆19 · Updated 4 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆110 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 · Updated last year
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆49 · Updated 2 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019 ☆54 · Updated last year
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆29 · Updated last year
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆32 · Updated 4 years ago
- A Transformer-based single-model, multi-scale VAE ☆55 · Updated 3 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆24 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆72 · Updated 2 years ago
- [ICLR 2022] Code for "Towards building a group-based unsupervised representation disentanglement framework" ☆15 · Updated 2 years ago
- [ACL 2023] Code for the paper "Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation" (https://arxiv.org/abs/2305.… ☆38 · Updated last year
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated this week
- Variational Transformers for Diverse Response Generation ☆80 · Updated 6 months ago
- Weighted Training for Cross-Task Learning ☆15 · Updated 2 years ago