sooftware / attentions
PyTorch implementations of several attention mechanisms for deep learning researchers.
☆548 · Updated 3 years ago
Alternatives and similar repositories for attentions
Users interested in attentions are comparing it to the libraries listed below.
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,168 · Updated 3 years ago
- Transformer implementation in PyTorch. ☆490 · Updated 6 years ago
- Transformer based on a variant of attention whose complexity is linear with respect to sequence length ☆815 · Updated last year
- Pytorch Lightning code guideline for conferences ☆1,280 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆421 · Updated 3 years ago
- Pytorch library for fast transformer implementations ☆1,754 · Updated 2 years ago
- Pytorch reimplementation for "Gradient Surgery for Multi-Task Learning" ☆384 · Updated 4 years ago
- Implement the paper "Self-Attention with Relative Position Representations" ☆139 · Updated 4 years ago
- Learning Rate Warmup in PyTorch ☆414 · Updated 5 months ago
- An implementation of local windowed attention for language modeling ☆489 · Updated 4 months ago
- Contrastive Predictive Coding for Automatic Speaker Verification ☆504 · Updated 6 years ago
- Implementation of Linformer for Pytorch ☆302 · Updated last year
- LSTM, RNN and GRU implementations using Pytorch ☆68 · Updated 4 years ago
- ☆468 · Updated 2 years ago
- An All-MLP solution for Vision, from Google AI ☆1,053 · Updated 5 months ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 4 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,190 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆780 · Updated 4 months ago
- Simple transformer implementation from scratch in pytorch. (archival, latest version on codeberg) ☆1,093 · Updated 8 months ago
- Simple pytorch implementation of focal loss ☆86 · Updated 2 years ago
- An (unofficial) implementation of Focal Loss, as described in the RetinaNet paper, generalized to the multi-class case. ☆239 · Updated last year
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆359 · Updated last year
- Multi-head attention in PyTorch ☆154 · Updated 6 years ago
- ☆64 · Updated 5 years ago
- Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch ☆429 · Updated 4 years ago
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ☆613 · Updated 3 years ago
- PyTorch implementation of the InfoNCE loss for self-supervised learning. ☆602 · Updated 2 years ago
- A simple tutorial of Variational AutoEncoders with Pytorch ☆426 · Updated last year
- PyTorch implementation of some learning rate schedulers for deep learning researchers.
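Most of the repositories above build on the same core operation: scaled dot-product attention. As a point of reference, here is a minimal NumPy sketch of that operation; it is not taken from any of the listed repos, and the function names are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """q, k, v: arrays of shape (seq_len, d_model).

    Returns (output, attention_weights).
    """
    d = q.shape[-1]
    # Pairwise similarity between queries and keys, scaled by sqrt(d).
    scores = q @ k.T / np.sqrt(d)          # (seq_len, seq_len)
    # Normalize each row into a distribution over positions.
    weights = softmax(scores, axis=-1)
    # Output is a weighted average of the value vectors.
    return weights @ v, weights

# Toy example: 4 positions, 4-dimensional identity vectors.
q = k = v = np.eye(4)
out, w = scaled_dot_product_attention(q, k, v)
```

The "linear attention" repos in the list (Performer, Linformer, linear-complexity transformers) replace the explicit `(seq_len, seq_len)` score matrix above with approximations that avoid materializing it, which is where the quadratic cost lies.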