xuanqing94 / FLOATER
Learning to Encode Position for Transformer with Continuous Dynamical Model
☆59 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for FLOATER
- code for Explicit Sparse Transformer ☆56 · Updated last year
- ☆32 · Updated 3 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 3 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 4 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆27 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆43 · Updated 3 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Mixture of Attention Heads ☆39 · Updated 2 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- ☆83 · Updated 5 years ago
- code for paper "Improving Sequence-to-Sequence Learning via Optimal Transport" ☆68 · Updated 5 years ago
- A Transformer-based single-model, multi-scale VAE ☆53 · Updated 3 years ago
- Variational Transformers for Diverse Response Generation ☆82 · Updated 3 months ago
- ☆22 · Updated 3 years ago
- Implementation of QKVAE ☆11 · Updated last year
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 3 years ago
- PyTorch implementation of Pay Attention to MLPs ☆39 · Updated 3 years ago
- ☆36 · Updated 4 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆35 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆106 · Updated 4 years ago
- Dispersed Exponential Family Mixture VAE ☆27 · Updated 4 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020) ☆42 · Updated 2 years ago
- Code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation" ☆29 · Updated 2 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆29 · Updated last year
- [ICML 2022] Latent Diffusion Energy-Based Model for Interpretable Text Modeling ☆63 · Updated 2 years ago
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆32 · Updated 4 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆71 · Updated last year