Skumarr53 / Attention-is-All-you-Need-PyTorch
This repo contains a PyTorch implementation of the "Attention Is All You Need" (Transformer) paper, applied to machine translation from French to English; a sketch of the paper's core attention mechanism follows below.
☆70 · Updated 5 years ago
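For orientation, below is a minimal PyTorch sketch of the scaled dot-product attention at the core of the paper: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. This is an illustrative toy, not code from the repository; the function name and tensor shapes are assumptions for the example.

```python
# Minimal sketch of scaled dot-product attention (illustrative only,
# not taken from the Skumarr53 repository).
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    # Similarity scores between queries and keys, scaled by sqrt(d_k).
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Block masked-out positions before the softmax.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    # Weighted sum of values, with weights given by the softmax.
    return F.softmax(scores, dim=-1) @ v

# Example (assumed shapes): batch of 2 sequences, 10 tokens, model dim 64.
q = k = v = torch.randn(2, 10, 64)
out = scaled_dot_product_attention(q, k, v)  # shape (2, 10, 64)
```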
Alternatives and similar repositories for Attention-is-All-you-Need-PyTorch
Users interested in Attention-is-All-you-Need-PyTorch are comparing it to the repositories listed below.
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago
- PyTorch–TensorBoard simple tutorial and example for beginners ☆23 · Updated 5 years ago
- ☆76 · Updated 5 years ago
- PyTorch implementation of OpenAI's Image GPT ☆260 · Updated 2 years ago
- ☆28 · Updated 5 years ago
- An educational step-by-step implementation of SimCLR that accompanies the blog post ☆31 · Updated 3 years ago
- Experiments with supervised contrastive learning methods using different loss functions ☆221 · Updated 2 years ago
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆150 · Updated 4 years ago
- PyTorch implementation of the Image Transformer for unconditional image generation ☆118 · Updated last year
- Implementation of Feedback Transformer in PyTorch ☆108 · Updated 4 years ago
- Notebook for comprehensive analysis of authors, organizations, and countries of ICML 2020 papers ☆56 · Updated 5 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆135 · Updated 4 years ago
- PyTorch implementation of Pay Attention to MLPs ☆41 · Updated 4 years ago
- Recurrent neural networks: building a custom LSTM/GRU cell in PyTorch ☆28 · Updated 5 years ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆93 · Updated 2 years ago
- Implements the ideas presented in "Supervised Contrastive Learning" (https://arxiv.org/pdf/2004.11362v1.pdf) by Khosla et al. ☆133 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- My solutions for assignments of CS231n: Convolutional Neural Networks for Visual Recognition ☆42 · Updated 7 years ago
- Reproducing the linear multi-head attention introduced in "Linformer: Self-Attention with Linear Complexity" ☆75 · Updated 5 years ago
- [ICML 2021 Oral] We show that pure attention suffers rank collapse, and how different mechanisms combat it ☆166 · Updated 4 years ago
- Official TensorFlow code for the paper "Efficient-CapsNet: Capsule Network with Self-Attention Routing" ☆272 · Updated 3 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- BERT + image captioning ☆134 · Updated 4 years ago
- Fully featured implementation of the Routing Transformer ☆296 · Updated 3 years ago
- TensorFlow implementation of "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" ☆86 · Updated 3 years ago
- Awesome Contrastive Learning for CV & NLP ☆165 · Updated 4 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 4 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆69 · Updated 2 years ago
- All about attention in neural networks: soft attention, attention maps, local and global attention, and multi-head attention ☆234 · Updated 5 years ago