google-deepmind / lamb
LAnguage Modelling Benchmarks
☆137 · Updated 4 years ago
Alternatives and similar repositories for lamb:
Users interested in lamb are comparing it to the repositories listed below.
- A simple module that consistently outperforms self-attention and Transformer models on major NMT datasets with SoTA performance. ☆86 · Updated last year
- ☆176 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- Code for the NIPS 2018 paper 'Frequency-Agnostic Word Representation' ☆115 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sen… ☆124 · Updated 2 years ago
- Implementation of the Universal Transformer in PyTorch ☆259 · Updated 6 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆269 · Updated 3 years ago
- PyTorch implementation of latent space reinforcement learning for E2E dialog, published at NAACL 2019. Released by Tiancheng Zhao (… ☆144 · Updated 5 years ago
- ☆218 · Updated 4 years ago
- Meta learning for Neural Machine Translation ☆41 · Updated 2 years ago
- Checking the interpretability of attention on text classification models ☆48 · Updated 5 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- Implementation of the "Poincaré GloVe: Hyperbolic word embeddings" paper ☆88 · Updated 4 years ago
- Neural Module Network for Reasoning over Text, ICLR 2020 ☆120 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- Generative Flow based Sequence-to-Sequence Toolkit written in Python ☆245 · Updated 5 years ago
- ☆47 · Updated 6 years ago
- Efficient Transformers for research in PyTorch and TensorFlow, using Locality Sensitive Hashing ☆94 · Updated 5 years ago
- Bi-Directional Block Self-Attention ☆123 · Updated 6 years ago
- An implementation of DeepMind's Relational Recurrent Neural Networks (NeurIPS 2018) in PyTorch ☆245 · Updated 6 years ago
- Source code for the paper "Hyperbolic Neural Networks", https://arxiv.org/abs/1805.09112 ☆175 · Updated 4 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆136 · Updated 3 years ago
- PyTorch implementation of R-Transformer. Parts of the code are adapted from the implementations of TCN and the Transformer. ☆228 · Updated 5 years ago
- Code for "Language GANs Falling Short" ☆59 · Updated 4 years ago
- Sparse and structured neural attention mechanisms ☆223 · Updated 4 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 3 years ago
- Bayesian Deep Active Learning for Natural Language Processing Tasks ☆147 · Updated 6 years ago