CyberZHG / torch-multi-head-attention
Multi-head attention in PyTorch
☆153 · Updated 6 years ago
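Since the one-line description is the whole pitch, here is a minimal sketch of what scaled dot-product multi-head attention looks like in plain PyTorch. Names and signatures below are illustrative, not claims about this repository's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Scaled dot-product multi-head attention (Vaswani et al., 2017)."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        # Separate projections for queries, keys, values, and the output.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        batch, seq_len, d_model = q.shape

        def split_heads(x):
            # (batch, seq, d_model) -> (batch, heads, seq, d_head)
            return x.view(batch, -1, self.num_heads, self.d_head).transpose(1, 2)

        q = split_heads(self.q_proj(q))
        k = split_heads(self.k_proj(k))
        v = split_heads(self.v_proj(v))

        # softmax(Q K^T / sqrt(d_head)) V, computed per head in parallel.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        out = F.softmax(scores, dim=-1) @ v  # (batch, heads, seq, d_head)

        # Merge the heads back and apply the output projection.
        out = out.transpose(1, 2).contiguous().view(batch, seq_len, d_model)
        return self.out_proj(out)

# Self-attention over a batch of 2 sequences of length 10 with d_model=64:
x = torch.randn(2, 10, 64)
mha = MultiHeadAttention(d_model=64, num_heads=8)
print(mha(x, x, x).shape)  # torch.Size([2, 10, 64])
```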
Alternatives and similar repositories for torch-multi-head-attention
Users interested in torch-multi-head-attention are comparing it to the libraries listed below.
- Experiments with supervised contrastive learning methods using different loss functions ☆220 · Updated 2 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆135 · Updated 4 years ago
- PyTorch implementation of Representation Learning with Contrastive Predictive Coding by van den Oord et al. (2018) ☆86 · Updated 3 years ago
- A demo of Chen et al., "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks", ICML 2018 ☆179 · Updated 3 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆189 · Updated 2 years ago
- Independent implementation of Supervised Contrastive Loss. Straight to the point and beyond ☆81 · Updated 4 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms (see the one-line sketch after this list) ☆259 · Updated 4 years ago
- Loss and accuracy go opposite ways...right? ☆95 · Updated 5 years ago
- Implementation of SYNTHESIZER: Rethinking Self-Attention in Transformer Models in PyTorch ☆70 · Updated 5 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 4 years ago
- PyTorch implementation of GradNorm. GradNorm addresses the problem of balancing multiple losses for multi-task learning by learning a… ☆269 · Updated 3 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- A repository for multi-task learning with toy data in PyTorch and TensorFlow ☆136 · Updated 6 years ago
- A collection of awesome things about mixed-sample data augmentation ☆132 · Updated 5 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆78 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆196 · Updated 2 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆165 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Implementation of RealFormer in PyTorch ☆100 · Updated 4 years ago
- A minimal PyTorch package implementing a gradient reversal layer (a short sketch follows this list). ☆158 · Updated 9 months ago
- Simple implementations of dilated LSTM, residual LSTM, and attention LSTM (following the corresponding papers). ☆17 · Updated 5 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 5 years ago
- Transformer/Transformer-XL/R-Transformer examples and explanations ☆26 · Updated 3 years ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" (sketched after this list) ☆92 · Updated 2 years ago
- Custom loss functions to use in (mainly) PyTorch. ☆39 · Updated 4 years ago
- My take on a practical implementation of Linformer for PyTorch. ☆417 · Updated 3 years ago
- Noise Contrastive Estimation for softmax output, written in PyTorch ☆319 · Updated 5 years ago
- A TensorFlow implementation of the paper arXiv:1604.03539 ☆133 · Updated 7 years ago
- Code release for "Flowformer: Linearizing Transformers with Conservation Flows" (ICML 2022), https://arxiv.org/pdf/2202.06258.pdf ☆324 · Updated last year
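As flagged in the list above, FNet's token mixing is essentially one line: replace self-attention with a 2D discrete Fourier transform over the hidden and sequence dimensions and keep only the real part. A minimal PyTorch sketch (the function name is mine, not the listed repo's API):

```python
import torch

def fourier_mixing(x: torch.Tensor) -> torch.Tensor:
    """FNet-style token mixing (Lee-Thorp et al., 2021): a DFT over the
    hidden dimension, then the sequence dimension, keeping the real part."""
    # x: (batch, seq_len, hidden). No learned parameters are involved.
    return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real
```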
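The gradient reversal layer referenced above (Ganin & Lempitsky, 2015) acts as the identity in the forward pass and flips the sign of the gradient in the backward pass, which is what lets domain-adversarial training run with a single backward call. A generic sketch, not the listed package's exact API:

```python
import torch
from torch.autograd import Function

class _GradReverse(Function):
    """Identity forward; gradient multiplied by -lambd on the way back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient; lambd itself gets no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return _GradReverse.apply(x, lambd)

# Gradients flowing back through the layer are negated:
x = torch.ones(3, requires_grad=True)
grad_reverse(x).sum().backward()
print(x.grad)  # tensor([-1., -1., -1.])
```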
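Finally, the flooding regularizer from the ICML 2020 entry is a one-line transform of the training loss, |loss − b| + b, where b is the "flood level". Once the loss drops below b, the gradient direction flips and pushes it back up toward b:

```python
import torch

def flooded_loss(loss: torch.Tensor, b: float = 0.1) -> torch.Tensor:
    # Flooding (Ishida et al., ICML 2020): keep the training loss near b.
    # Above b this is the ordinary loss; below b the gradient sign flips.
    return (loss - b).abs() + b
```

The flood level b is a hyperparameter; 0.1 here is only a placeholder value.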