lioutasb / TaLKConvolutions
Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020)
☆29 · Updated 5 years ago
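For context, here is a minimal sketch of the TaLK idea: instead of a fixed-size convolution kernel, each token predicts left/right window offsets, and the window average is computed from prefix sums so the whole sequence costs O(n). This is a simplified, integer-offset variant written for illustration; the official repo uses fractional offsets with interpolation and a fused CUDA kernel, and all names below are hypothetical.

```python
import torch
import torch.nn as nn

class TaLKConvSketch(nn.Module):
    """Illustrative TaLK-style layer: per-token adaptive windows
    averaged via prefix sums (integer offsets, no interpolation)."""
    def __init__(self, dim, max_left=15, max_right=15):
        super().__init__()
        self.max_left, self.max_right = max_left, max_right
        self.offsets = nn.Linear(dim, 2)  # predicts relative offsets in [0, 1]

    def forward(self, x):  # x: (batch, seq_len, dim)
        B, T, D = x.shape
        # Prefix sums over time: any window sum becomes one subtraction.
        prefix = torch.cumsum(x, dim=1)
        prefix = torch.cat([x.new_zeros(B, 1, D), prefix], dim=1)  # (B, T+1, D)
        a = torch.sigmoid(self.offsets(x))                          # (B, T, 2)
        pos = torch.arange(T, device=x.device)
        left = (pos - (a[..., 0] * self.max_left).round().long()).clamp(min=0)
        right = (pos + (a[..., 1] * self.max_right).round().long()).clamp(max=T - 1)
        # Window sum = prefix[right + 1] - prefix[left], then normalize by size.
        idx = lambda i: i.unsqueeze(-1).expand(-1, -1, D)
        win = prefix.gather(1, idx(right + 1)) - prefix.gather(1, idx(left))
        return win / (right - left + 1).unsqueeze(-1).float()

# Usage: y = TaLKConvSketch(dim=512)(torch.randn(2, 10, 512))  # (2, 10, 512)
```

For causal decoding the right offset would be clamped to zero so the window never looks ahead; the paper's version also normalizes per attention head and interpolates fractional window boundaries rather than rounding.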
Alternatives and similar repositories for TaLKConvolutions
Users interested in TaLKConvolutions are comparing it to the repositories listed below.
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 3 years ago
- ☆22 · Updated 4 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 5 years ago
- Code for the ACL 2020 paper Character-Level Translation with Self-Attention ☆31 · Updated 5 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- ☆13 · Updated 6 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- ☆20 · Updated 6 years ago
- Curriculum Learning related papers and materials ☆53 · Updated 5 years ago
- A Pytorch implementation of the Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 3 years ago
- ☆16 · Updated 4 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 6 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020) ☆43 · Updated 3 years ago
- Pytorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- A Transformer-based single-model, multi-scale VAE ☆58 · Updated 4 years ago
- PyTorch implementation of the NeurIPS 2020 paper "Learning Sparse Prototypes for Text Generation" ☆22 · Updated 4 years ago
- ☆53 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 3 years ago
- The implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆71 · Updated 5 years ago
- ☆11 · Updated 5 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- Code for the paper "Continual and Multi-Task Architecture Search" (ACL 2019) ☆41 · Updated 6 years ago
- Hard-Coded Gaussian Attention for Neural Machine Translation ☆36 · Updated 2 years ago
- Variational Transformers for Diverse Response Generation ☆82 · Updated last year
- ☆62 · Updated 3 years ago
- Visually Grounded PCFG Induction ☆39 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 6 years ago