xuanqing94 / FLOATER
Learning to Encode Position for Transformer with Continuous Dynamical Model
☆60 · Updated 5 years ago
Alternatives and similar repositories for FLOATER
Users interested in FLOATER are comparing it to the libraries listed below.
- code for Explicit Sparse Transformer ☆61 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- This is a code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… ☆34 · Updated 2 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Variational Transformers for Diverse Response Generation ☆81 · Updated last year
- How Does Selective Mechanism Improve Self-attention Networks? ☆29 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020) ☆43 · Updated 2 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- ☆64 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- code for paper "Improving Sequence-to-Sequence Learning via Optimal Transport" ☆68 · Updated 6 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆120 · Updated 4 years ago
- Implementation of QKVAE ☆11 · Updated 2 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated 2 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated 2 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆166 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019. ☆57 · Updated last year
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆196 · Updated 2 years ago
- ☆81 · Updated last year
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆47 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- [ICML 2022] Latent Diffusion Energy-Based Model for Interpretable Text Modeling ☆66 · Updated 3 years ago