lancopku / Explicit-Sparse-Transformer
Code for Explicit Sparse Transformer
☆62 · Updated last year
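This repository hosts the code for the Explicit Sparse Transformer, whose core idea is to keep only the top-k attention scores for each query and mask the rest to negative infinity before the softmax, so every query attends to a small, explicitly selected set of keys. Below is a minimal PyTorch sketch of that selection step, assuming standard scaled dot-product attention; the function name and the `topk` default are illustrative, not the repo's API.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, topk=8):
    """Scaled dot-product attention keeping only the top-k scores per query.

    q, k, v: (batch, heads, seq_len, head_dim); requires topk <= seq_len.
    Illustrative sketch, not the repository's actual API.
    """
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # (b, h, n, n)
    # k-th largest score in each query row (ties may keep a few extra keys)
    kth = scores.topk(topk, dim=-1).values[..., -1:]        # (b, h, n, 1)
    # Everything below the k-th score gets -inf, so softmax zeroes it out
    sparse = scores.masked_fill(scores < kth, float("-inf"))
    return F.softmax(sparse, dim=-1) @ v
```

Setting `topk` to the sequence length recovers ordinary dense attention, which makes the sparsification easy to compare against a dense baseline.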
Alternatives and similar repositories for Explicit-Sparse-Transformer
Users interested in Explicit-Sparse-Transformer are comparing it to the libraries listed below.
- Mixture of Attention Heads ☆44 · Updated 2 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 4 years ago
- ☆33 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆41 · Updated 4 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (see the sketch after this list) ☆22 · Updated 4 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆52 · Updated 2 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- Code for "Understanding and Improving Layer Normalization"☆46Updated 5 years ago
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding”☆30Updated 2 years ago
- [ICML 2022] Latent Diffusion Energy-Based Model for Interpretable Text Modeling☆65Updated 2 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 4 years ago
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning ☆11 · Updated 5 months ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆68 · Updated 3 years ago
- A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering ☆42 · Updated 4 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆117 · Updated 4 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) ☆41 · Updated 3 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆39 · Updated 4 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019 ☆56 · Updated last year
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆14 · Updated 4 years ago
- Variational Transformers for Diverse Response Generation ☆81 · Updated 10 months ago
- ☆36 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- This is the code accompanying the AAAI 2022 paper "Ranking Info Noise Contrastive Estimation: Boosting Contrastive Learning via Ranked Po… ☆25 · Updated 2 years ago
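For contrast with the top-k sparsification sketched above, the "Transformers are RNNs" entry in this list removes the softmax entirely: a positive feature map makes the kernelized attention product associative, so attention can be computed in time linear in sequence length. A minimal sketch of the non-causal variant follows; the paper's autoregressive form instead maintains running prefix sums, and the function name here is illustrative.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention, O(seq_len) instead of O(seq_len^2).

    q, k, v: (batch, heads, seq_len, head_dim). Illustrative sketch.
    """
    # Feature map phi(x) = elu(x) + 1 keeps features positive, as in the paper
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associativity: (Q K^T) V == Q (K^T V); the n x n score matrix never forms
    kv = torch.einsum("bhnd,bhne->bhde", k, v)        # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
```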