cerebroai / reformers
Efficient Transformers for research, in PyTorch and TensorFlow, using Locality-Sensitive Hashing
☆95 · Updated 5 years ago
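For background, the repository implements the Reformer idea: instead of full O(n²) dot-product attention, queries and keys are hashed into buckets with locality-sensitive hashing, and attention is computed only within each bucket, bringing the cost down to roughly O(n log n) after sorting by bucket. Below is a minimal sketch of the bucketing step, assuming PyTorch; the helper `lsh_buckets` is illustrative, not this repository's API.

```python
import torch

def lsh_buckets(x: torch.Tensor, n_buckets: int) -> torch.Tensor:
    """Random-rotation LSH: nearby vectors land in the same bucket
    with high probability (the angular hash used by Reformer-style attention)."""
    d = x.shape[-1]
    r = torch.randn(d, n_buckets // 2)    # random projection directions
    proj = x @ r                          # [n, n_buckets // 2]
    # Hash = argmax over [proj, -proj], giving a bucket id in [0, n_buckets).
    return torch.cat([proj, -proj], dim=-1).argmax(dim=-1)

qk = torch.randn(1024, 64)               # shared query/key vectors, one per position
buckets = lsh_buckets(qk, n_buckets=32)  # [1024] bucket assignments
```

Attention is then restricted to positions that share a bucket (the Reformer additionally sorts by bucket and attends within fixed-size chunks), which is what makes the model memory-efficient on long sequences.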
Alternatives and similar repositories for reformers
Users interested in reformers are comparing it to the libraries listed below.
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch ☆62 · Updated 7 years ago
- Transformer-XL with checkpoint loader ☆68 · Updated 3 years ago
- LAMB Optimizer for Large Batch Training (TensorFlow version) ☆121 · Updated 5 years ago
- ☆47 · Updated 6 years ago
- LAnguage Modelling Benchmarks ☆138 · Updated 5 years ago
- Position embedding layers in Keras ☆58 · Updated 3 years ago
- Scripts to train a bidirectional LSTM with knowledge distillation from BERT ☆159 · Updated 6 years ago
- Implementation of the LAMB optimizer for Keras, from the paper "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes" ☆75 · Updated 6 years ago
- ☆220 · Updated 5 years ago
- Encoding position with the word embeddings ☆84 · Updated 7 years ago
- Knowledge distillation for Transformer language models ☆53 · Updated 2 years ago
- Adaptive embedding and softmax ☆17 · Updated 3 years ago
- CapsNet for NLP ☆66 · Updated 6 years ago
- Re-implementation of ELMo on Keras ☆135 · Updated 2 years ago
- Implementation of the Universal Transformer in PyTorch ☆265 · Updated 7 years ago
- Simple TensorFlow implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017) ☆91 · Updated 7 years ago
- Fork of huggingface/pytorch-pretrained-BERT for BERT on STILTs ☆106 · Updated 3 years ago
- ☆38 · Updated 8 years ago
- A PyTorch implementation of the Reformer network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 3 years ago
- BERT extension in TensorFlow ☆30 · Updated 6 years ago
- TensorFlow implementation of "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing" (2015) ☆42 · Updated 7 years ago
- Keras implementation of the “Gated Linear Unit” ☆23 · Updated last year
- Bi-Directional Block Self-Attention ☆122 · Updated 7 years ago
- TensorFlow implementation of "Variational Attention for Sequence-to-Sequence Models" (COLING 2018) ☆72 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆86 · Updated 2 years ago
- Multiple Different Natural Language Processing Tasks in a Single Deep Model ☆48 · Updated 7 years ago
- [NeurIPS 2019] Spherical Text Embedding ☆183 · Updated 2 years ago
- Text generation using a variational autoencoder ☆110 · Updated 8 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. ☆231 · Updated 6 years ago
- XLNet for generating language ☆166 · Updated 4 years ago