b-etienne / Seq2seq-PyTorch
☆76 · Updated 5 years ago
Alternatives and similar repositories for Seq2seq-PyTorch
Users interested in Seq2seq-PyTorch are comparing it to the repositories listed below.
- Using PyTorch's nn.Transformer module to create an English-to-French neural machine translation model (see the sketch after this list). ☆78 · Updated 5 years ago
- Sequence to Sequence Models in PyTorch ☆44 · Updated last year
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch LSTM. ☆134 · Updated 5 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆152 · Updated 3 years ago
- A PyTorch implementation of the Transformer from "Attention Is All You Need" ☆106 · Updated 4 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. ☆230 · Updated 6 years ago
- A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText. ☆180 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Minimalist implementation of a BERT sentence classifier with PyTorch Lightning, Transformers, and PyTorch-NLP. ☆219 · Updated 2 years ago
- Two-layer hierarchical softmax implementation for PyTorch ☆70 · Updated 4 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago
- ☆219 · Updated 5 years ago
- The Annotated Encoder-Decoder with Attention ☆166 · Updated 4 years ago
- Scripts to train a bidirectional LSTM with knowledge distillation from BERT ☆158 · Updated 5 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- PyTorch implementation of beam search decoding for seq2seq models ☆339 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Encoding position with the word embeddings. ☆84 · Updated 7 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples, and utilities. ☆35 · Updated 6 years ago
- Machine Translation using Transformers ☆29 · Updated 5 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau attention, soft attention, additive attention, hierarchical attention… ☆125 · Updated 4 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated 2 years ago
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- Empower Sequence Labeling with Task-Aware Neural Language Model | a PyTorch Tutorial to Sequence Labeling ☆364 · Updated 5 years ago
- This is where I put all my work in Natural Language Processing ☆97 · Updated 4 years ago
- Implementation of a linear-chain CRF in PyTorch ☆97 · Updated 4 years ago
- ☆16 · Updated 7 years ago
- PyTorch implementation of BERT from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 6 years ago
- Semi-Supervised Learning for Text Classification ☆83 · Updated 6 years ago
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch ☆235 · Updated 2 years ago
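
The first entry above builds an English-to-French translation model on PyTorch's nn.Transformer. As a rough illustration of that pattern (not code taken from any of the listed repositories), a minimal sketch might look like the following; the vocabulary sizes, model dimension, and toy inputs are placeholder assumptions, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

# Minimal sketch of a seq2seq translation model on top of nn.Transformer.
# All sizes below are illustrative placeholders, not values from the repos above.
class ToyTranslator(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, d_model=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        # nn.Transformer does not add positional encodings; a real model would.
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.generator = nn.Linear(d_model, tgt_vocab)

    def forward(self, src, tgt):
        # Causal mask so each target position only attends to earlier positions.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(self.src_embed(src), self.tgt_embed(tgt),
                               tgt_mask=tgt_mask)
        return self.generator(out)

model = ToyTranslator()
src = torch.randint(0, 1000, (2, 10))   # batch of source token ids
tgt = torch.randint(0, 1000, (2, 8))    # shifted target token ids
logits = model(src, tgt)                # shape: (2, 8, tgt_vocab)
```

In practice the listed repositories add positional encodings, padding masks, label-smoothed cross-entropy training, and a decoding loop (greedy or beam search) on top of this skeleton.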