Skumarr53 / Attention-is-All-you-Need-PyTorch
PyTorch implementation of the "Attention is All You Need" (Transformer) paper, applied to machine translation from French to English.
☆70 · Updated 5 years ago
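The building block shared by this repo and most of the attention papers listed below is scaled dot-product attention, softmax(QKᵀ/√d_k)V, from "Attention is All You Need". A minimal NumPy sketch (illustrative only, not code from this repository):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V                            # attention-weighted sum of values

# Toy example: 3 query positions attending over 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8): one context vector per query
```

In the full Transformer this is run in parallel over several heads (multi-head attention) with learned projections of Q, K, and V; the sketch above shows only the single-head core.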
Alternatives and similar repositories for Attention-is-All-you-Need-PyTorch
Users interested in Attention-is-All-you-Need-PyTorch are comparing it to the libraries listed below.
- PyTorch Implementation of OpenAI's Image GPT ☆260 · Updated 2 years ago
- Pytorch implementation of the image transformer for unconditional image generation ☆118 · Updated last year
- [ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆95 · Updated 2 years ago
- ☆76 · Updated 5 years ago
- A simple-to-use pytorch wrapper for contrastive self-supervised learning on any neural network ☆150 · Updated 4 years ago
- Pytorch-tensorboard simple tutorial and example for a beginner ☆23 · Updated 5 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Guide for both TensorFlow and PyTorch in a comparative way ☆108 · Updated 6 years ago
- ☆28 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 5 years ago
- An educational step-by-step implementation of SimCLR that accompanies the blog post ☆31 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- TensorFlow implementation of "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" ☆86 · Updated 3 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 4 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated 2 years ago
- Fully featured implementation of Routing Transformer ☆297 · Updated 4 years ago
- Implementation of Feedback Transformer in Pytorch ☆108 · Updated 4 years ago
- Experiments with supervised contrastive learning methods with different loss functions ☆222 · Updated 2 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Updated 5 years ago
- Implements the ideas presented in https://arxiv.org/pdf/2004.11362v1.pdf by Khosla et al. ☆133 · Updated 5 years ago
- Notebook for comprehensive analysis of authors, organizations, and countries of ICML 2020 papers ☆56 · Updated 5 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆135 · Updated 4 years ago
- Deep Learning project template for PyTorch (multi-GPU training is supported) ☆138 · Updated 2 years ago
- A PyTorch implementation of the paper Show, Attend and Tell: Neural Image Caption Generation with Visual Attention ☆84 · Updated 6 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆133 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Updated 4 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 4 years ago
- Experiments on different datasets on how to grow networks during training to learn new image categories ☆62 · Updated 5 years ago
- Implementation of modern data augmentation techniques in TensorFlow 2.x to be used in your training pipeline ☆34 · Updated 5 years ago
- Minimal implementation of SimSiam (https://arxiv.org/abs/2011.10566) in TensorFlow 2 ☆98 · Updated 4 years ago