xuanqing94 / FLOATER
Learning to Encode Position for Transformer with Continuous Dynamical Model
☆59Updated 4 years ago
Alternatives and similar repositories for FLOATER:
Users interested in FLOATER are comparing it to the libraries listed below
- How Does Selective Mechanism Improve Self-attention Networks?☆27Updated 3 years ago
- code for Explicit Sparse Transformer☆60Updated last year
- FlatNCE: A Novel Contrastive Representation Learning Objective☆90Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference"☆28Updated 5 years ago
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding”☆29Updated last year
- ☆33Updated 3 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022)☆41Updated 2 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020)☆43Updated 2 years ago
- Implementation of Mogrifier LSTM in PyTorch☆35Updated 4 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021☆39Updated 3 years ago
- Code for "Understanding and Improving Layer Normalization"☆46Updated 5 years ago
- [ICML 2022] Latent Diffusion Energy-Based Model for Interpretable Text Modeling☆65Updated 2 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020)☆29Updated 4 years ago
- ☆22Updated 3 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023.☆56Updated 2 years ago
- Example scripts for drawing illustration plots for papers using the seaborn package☆14Updated 5 years ago
- Mixture of Attention Heads☆41Updated 2 years ago
- A Transformer-based single-model, multi-scale VAE☆55Updated 3 years ago
- ☆83Updated 5 years ago
- ☆152Updated 3 years ago
- ☆20Updated 5 years ago
- MLPs for Vision and Language Modeling (Coming Soon)☆27Updated 3 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains".☆51Updated 2 years ago
- Weighted Training for Cross-Task Learning☆15Updated 2 years ago
- Variational Transformers for Diverse Response Generation☆80Updated 7 months ago
- ☆22Updated 2 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners"☆43Updated last year
- Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning".☆85Updated 2 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch☆70Updated 4 years ago
- ☆37Updated 2 years ago