Skumarr53 / Attention-is-All-you-Need-PyTorch
This repo contains a PyTorch implementation of the "Attention Is All You Need" (Transformer) paper for machine translation from French to English.
☆70 · Updated 5 years ago
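For orientation, here is a minimal sketch of the kind of model the repo implements: an encoder-decoder Transformer for sequence-to-sequence translation. It uses PyTorch's built-in `nn.Transformer` rather than the repo's own modules, and the vocabulary sizes, hyperparameters, learned positional embeddings, and toy batch are illustrative assumptions, not taken from the repository.

```python
# Minimal sketch of a Transformer for translation (not the repo's actual code).
import math
import torch
import torch.nn as nn

class TranslationTransformer(nn.Module):
    def __init__(self, src_vocab=8000, tgt_vocab=8000, d_model=512, nhead=8,
                 num_layers=6, dim_ff=2048, max_len=512):
        super().__init__()
        self.d_model = d_model
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        # Learned positional embeddings; the paper uses sinusoidal encodings.
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_ff, batch_first=True)
        self.out_proj = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # src_ids: (batch, src_len), tgt_ids: (batch, tgt_len) token indices.
        pos_src = torch.arange(src_ids.size(1), device=src_ids.device)
        pos_tgt = torch.arange(tgt_ids.size(1), device=tgt_ids.device)
        src = self.src_emb(src_ids) * math.sqrt(self.d_model) + self.pos_emb(pos_src)
        tgt = self.tgt_emb(tgt_ids) * math.sqrt(self.d_model) + self.pos_emb(pos_tgt)
        # Causal mask: each target position attends only to earlier positions.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt_ids.size(1)).to(src_ids.device)
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out_proj(hidden)  # (batch, tgt_len, tgt_vocab) logits

# Toy forward pass with random token ids (e.g. French source, English target).
model = TranslationTransformer()
src = torch.randint(0, 8000, (2, 10))
tgt = torch.randint(0, 8000, (2, 12))
print(model(src, tgt).shape)  # torch.Size([2, 12, 8000])
```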
Alternatives and similar repositories for Attention-is-All-you-Need-PyTorch
Users interested in Attention-is-All-you-Need-PyTorch are comparing it to the libraries listed below.
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago
- ☆28 · Updated 5 years ago
- PyTorch implementation of the image transformer for unconditional image generation ☆118 · Updated last year
- ☆76 · Updated 5 years ago
- Guide for both TensorFlow and PyTorch in a comparative way ☆108 · Updated 6 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Experiments with supervised contrastive learning methods with different loss functions ☆221 · Updated 2 years ago
- [ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆93 · Updated 2 years ago
- BERT + Image Captioning ☆134 · Updated 4 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 4 years ago
- An educational step-by-step implementation of SimCLR that accompanies the blog post ☆31 · Updated 3 years ago
- PyTorch Implementation of OpenAI's Image GPT ☆258 · Updated last year
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- A PyTorch implementation of the Transformer in "Attention is All You Need" ☆106 · Updated 4 years ago
- Multi-GPU Training Code for Deep Learning with PyTorch ☆207 · Updated 6 months ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆166 · Updated 4 years ago
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆148 · Updated 4 years ago
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆135 · Updated 4 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- Two-Layer Hierarchical Softmax Implementation for PyTorch ☆70 · Updated 4 years ago
- A PyTorch implementation of the paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" ☆84 · Updated 5 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Updated 5 years ago
- Fully featured implementation of Routing Transformer ☆298 · Updated 3 years ago
- An implementation of drophead regularization for PyTorch transformers ☆19 · Updated 4 years ago
- PyTorch-TensorBoard simple tutorial and example for beginners ☆23 · Updated 5 years ago
- Transformer-based Conditional Variational Autoencoder for Controllable Story Generation ☆158 · Updated 3 years ago
- Implementing the Pyramid Scene Parsing Network (PSPNet) paper using PyTorch ☆16 · Updated 5 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in PyTorch ☆54 · Updated 4 years ago
- Awesome Contrastive Learning for CV & NLP ☆164 · Updated 4 years ago
- ☆53 · Updated 4 years ago