astorfi / sequence-to-sequence-from-scratch
Sequence to Sequence from Scratch Using PyTorch
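The repositories on this page all revolve around encoder-decoder (seq2seq) modelling in PyTorch. For orientation, here is a minimal sketch of that architecture; every class name, layer size, and vocabulary size below is an illustrative assumption, not code from the astorfi repository or any repo listed here.

```python
# Minimal encoder-decoder (seq2seq) sketch in PyTorch.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                   # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))
        return hidden                         # (1, batch, hid_dim)

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, hidden):           # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden       # logits: (batch, tgt_len, vocab)

# Tiny forward pass: encode a source batch, then decode the target
# batch with teacher forcing (feeding in the gold target tokens).
enc, dec = Encoder(vocab_size=10), Decoder(vocab_size=10)
src = torch.randint(0, 10, (2, 5))            # batch of 2, source length 5
tgt = torch.randint(0, 10, (2, 7))            # batch of 2, target length 7
logits, _ = dec(tgt, enc(src))
print(logits.shape)                           # torch.Size([2, 7, 10])
```

Training would add a cross-entropy loss over the logits; several repos below extend this skeleton with attention over the encoder states rather than compressing the source into a single hidden vector.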
☆123 Updated 5 years ago
Alternatives and similar repositories for sequence-to-sequence-from-scratch
Users interested in sequence-to-sequence-from-scratch are comparing it to the libraries listed below.
- The Annotated Encoder Decoder with Attention ☆166 Updated 4 years ago
- ☆54 Updated 6 years ago
- Scripts to train a bidirectional LSTM with knowledge distillation from BERT ☆158 Updated 5 years ago
- Explains NLP building blocks in a simple manner. ☆251 Updated 5 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in; fully compatible with the PyTorch LSTM. ☆133 Updated 5 years ago
- Demonstration of the results in "Text Normalization using Memory Augmented Neural Networks" by Subhojeet Pramanik and Aman Hussain ☆60 Updated 5 years ago
- Implementation of papers on deep seq2seq learning using PyTorch. ☆219 Updated 6 years ago
- Sequence to Sequence Models in PyTorch ☆44 Updated 10 months ago
- PyTorch DataLoader for seq2seq ☆85 Updated 6 years ago
- Pre-training of Language Models for Language Understanding ☆83 Updated 5 years ago
- Code examples for CMU CS11-731, Machine Translation and Sequence-to-Sequence Models ☆35 Updated 5 years ago
- ☆76 Updated 5 years ago
- MT Tutorial for the JSALT 2019 Summer School ☆48 Updated 6 years ago
- Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. [IN PROGRESS] ☆22 Updated 5 years ago
- Encoding position with the word embeddings. ☆83 Updated 7 years ago
- I try my best to keep up with cutting-edge knowledge in Machine Learning/Deep Learning and Natural Language Processing. These are my not… ☆283 Updated 6 years ago
- LectureBank Dataset ☆134 Updated last year
- Fork of huggingface/pytorch-pretrained-BERT for BERT on STILTs ☆107 Updated 2 years ago
- Text generation with a Variational Autoencoder ☆64 Updated 6 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft attention, additive attention, hierarchical attention… ☆126 Updated 3 years ago
- ☆82 Updated 3 years ago
- Language Model Fine-tuning for Moby Dick ☆42 Updated 6 years ago
- LM, ULMFiT et al. ☆46 Updated 5 years ago
- LAnguage Modelling Benchmarks ☆137 Updated 5 years ago
- Cascaded Text Generation with Markov Transformers ☆129 Updated 2 years ago
- Datasets I have created for scientific summarization, and a trained BertSum model ☆115 Updated 5 years ago
- Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Process… ☆251 Updated 7 years ago
- Assessing syntactic abilities of BERT ☆39 Updated 5 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ☆58 Updated 4 years ago
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch ☆61 Updated 6 years ago