twistedcubic / attention-rank-collapse
[ICML 2021 Oral] We show that pure attention suffers rank collapse, and how different mechanisms combat it.
☆164 · Updated 4 years ago
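As a quick illustration of the claim above, here is a toy sketch (my own example, not the repository's code; the sizes and weight scales are assumptions) that stacks random softmax self-attention layers with no skip connections or MLPs and tracks how far the token matrix is from rank 1. The relative singular-value residual is used as a proxy for the paper's rank-1 residual measure; adding a skip connection visibly slows the collapse.

```python
# Toy illustration (not the repository's code) of the paper's claim:
# stacking pure softmax self-attention layers with random weights and no
# skip connections or MLPs drives token representations toward rank 1.
# All shapes and scales below are illustrative assumptions.
import torch

torch.manual_seed(0)
n, d, depth = 16, 32, 12  # tokens, width, number of attention layers

def rank1_residual(x):
    # Relative Frobenius distance to the best rank-1 approximation
    # (Eckart-Young: drop the leading singular value).
    s = torch.linalg.svdvals(x)
    return (s[1:].square().sum().sqrt() / s.square().sum().sqrt()).item()

def attention(x, wq, wk, wv):
    # Single-head softmax self-attention, no residual, no MLP.
    a = torch.softmax((x @ wq) @ (x @ wk).T / d**0.5, dim=-1)
    return a @ (x @ wv)

x_pure = x_skip = torch.randn(n, d)
for layer in range(depth):
    wq, wk, wv = (torch.randn(d, d) / d**0.5 for _ in range(3))
    x_pure = attention(x_pure, wq, wk, wv)           # pure attention: collapses
    x_skip = x_skip + attention(x_skip, wq, wk, wv)  # skip connection: resists
    print(f"layer {layer + 1:2d}  pure: {rank1_residual(x_pure):.2e}  "
          f"skip: {rank1_residual(x_skip):.2e}")
```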
Alternatives and similar repositories for attention-rank-collapse
Users interested in attention-rank-collapse are comparing it with the repositories listed below.
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆92 · Updated 2 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago
- [NeurIPS 2020] Debiased Contrastive Learning ☆282 · Updated 2 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated last year
- [NeurIPS 2021] Official PyTorch implementation of Long-Short Transformer ☆225 · Updated 3 years ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆192 · Updated 2 years ago
- Fully featured implementation of Routing Transformer ☆292 · Updated 3 years ago
- Implementation of the Sparsemax activation in PyTorch ☆160 · Updated 5 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 3 years ago
- PyTorch implementation of a Variational Autoencoder with Gumbel-Softmax distribution ☆208 · Updated 6 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆293 · Updated 3 weeks ago
- ☆81 · Updated 10 months ago
- Awesome Contrastive Learning for CV & NLP ☆163 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- Code used in the paper "Understanding Dimensional Collapse in Contrastive Self-supervised Learning" ☆77 · Updated 2 years ago
- Crawl & visualize ICLR papers and reviews ☆107 · Updated 3 years ago
- Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆119 · Updated 3 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆117 · Updated 4 years ago
- [ICML 2020] Open-source code for the paper "Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere" ☆444 · Updated 2 years ago
- MoCo with Alignment and Uniformity loss ☆62 · Updated 3 years ago
- The newest reading list for representation learning ☆115 · Updated 4 years ago
- Implementation of Fast Transformer in PyTorch ☆174 · Updated 3 years ago
- Implementation of "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" in PyTorch ☆70 · Updated 5 years ago
- Loss and accuracy go opposite ways... right? ☆93 · Updated 5 years ago
- Code for the ICML 2020 paper "Improving Transformer Optimization Through Better Initialization" ☆88 · Updated 4 years ago
- [NeurIPS 2020 Spotlight] Official PyTorch implementation of the paper "Self-Supervised Relational Reasoning for Representation Learning" ☆143 · Updated last year
- ☆84 · Updated 4 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago