sooftware / attentions
PyTorch implementations of various attention mechanisms for deep learning researchers.
☆546 · Updated 3 years ago
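For orientation, here is a minimal sketch of the scaled dot-product attention that this repository and most of the libraries below build on. The module name and signature are illustrative only, not the actual API of sooftware/attentions.

```python
# Minimal sketch of scaled dot-product attention:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Names and shapes are illustrative, not any library's actual API.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledDotProductAttention(nn.Module):
    def forward(self, query, key, value, mask=None):
        d_k = query.size(-1)
        # (batch, q_len, k_len) attention scores, scaled by sqrt(d_k)
        scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        # Weighted sum of values, plus the attention map for inspection
        return torch.matmul(attn, value), attn


# Toy usage: batch of 2 sequences, length 5, model dimension 16.
q = k = v = torch.randn(2, 5, 16)
context, weights = ScaledDotProductAttention()(q, k, v)
print(context.shape, weights.shape)  # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```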
Alternatives and similar repositories for attentions
Users interested in attentions are comparing it to the libraries listed below:
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,172 · Updated 4 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆827 · Updated last year
- Pytorch Lightning code guideline for conferences ☆1,286 · Updated 2 years ago
- Pytorch library for fast transformer implementations ☆1,762 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆422 · Updated 3 years ago
- An implementation of local windowed attention for language modeling ☆496 · Updated 6 months ago
- Reformer, the efficient Transformer, in Pytorch ☆2,193 · Updated 2 years ago
- Transformer implementation in PyTorch. ☆492 · Updated 6 years ago
- Learning Rate Warmup in PyTorch ☆415 · Updated 7 months ago
- Pytorch reimplementation for "Gradient Surgery for Multi-Task Learning" ☆394 · Updated 4 years ago
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ☆612 · Updated 3 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆802 · Updated last week
- Implementation of Linformer for Pytorch ☆305 · Updated 2 years ago
- Early stopping for PyTorch ☆1,270 · Updated last year
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆362 · Updated last year
- ☆468 · Updated 2 years ago
- An All-MLP solution for Vision, from Google AI ☆1,056 · Updated 7 months ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,225 · Updated last year
- An implementation of masked language modeling for Pytorch, made as concise and simple as possible ☆181 · Updated 2 years ago
- Implementation of Transformer encoder in PyTorch ☆71 · Updated 5 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,194 · Updated 2 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆139 · Updated 5 years ago
- An implementation of 1D, 2D, and 3D positional encoding in Pytorch and TensorFlow (see the sketch after this list) ☆615 · Updated last year
- PyTorch masked language model (BERT) ☆73 · Updated 6 years ago
- Transformers for Longer Sequences ☆628 · Updated 3 years ago
- ☆64 · Updated 5 years ago
- Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch ☆430 · Updated 4 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆235 · Updated 2 years ago
- kmeans using PyTorch ☆529 · Updated 2 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆261 · Updated 4 years ago
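Several of the positional-encoding and rotary-embedding entries above generalize how position information is injected into attention. As a point of reference, below is a sketch of the classic 1D sinusoidal positional encoding from "Attention Is All You Need"; the function name and shapes are illustrative only and are not the API of any listed library.

```python
# Sketch of 1D sinusoidal positional encoding (Vaswani et al., 2017).
# The listed libraries extend this idea to 2D/3D grids and rotary variants;
# this function is illustrative, not any library's actual API.
import math
import torch


def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Return a (seq_len, d_model) tensor of sine/cosine position codes."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (seq_len, 1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * (-math.log(10000.0) / d_model)
    )                                                                     # (d_model / 2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions: sine
    pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions: cosine
    return pe


# Toy usage: add position information to a batch of token embeddings.
x = torch.randn(2, 50, 128)                       # (batch, seq_len, d_model)
x = x + sinusoidal_positional_encoding(50, 128)   # broadcasts over the batch
print(x.shape)  # torch.Size([2, 50, 128])
```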