hendrycks / GELUs
A smoother activation function (undergrad code)
☆109 · Updated 4 years ago
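The repo implements the Gaussian Error Linear Unit, GELU(x) = x · Φ(x), where Φ is the standard normal CDF. A minimal sketch of the exact definition and the tanh approximation from the paper (function names here are illustrative, not the repo's API):

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF.

    Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    """
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Fast tanh approximation of GELU, as used in many frameworks."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    ))
```

Unlike ReLU's hard gate, GELU weights the input by the probability that a standard normal variable falls below it, so the function is smooth everywhere; the two variants agree to roughly three decimal places over typical activation ranges.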
Alternatives and similar repositories for GELUs:
Users interested in GELUs are comparing it to the libraries listed below.
- Sparse and structured neural attention mechanisms ☆223 · Updated 4 years ago
- Adaptive Softmax implementation for PyTorch ☆80 · Updated 5 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 8 months ago
- Re-implementation of the Noise Contrastive Estimation algorithm for PyTorch, following "Noise-contrastive estimation: A new estimation pr…" ☆45 · Updated 5 years ago
- PyTorch implementation of Global Vectors for Word Representation ☆92 · Updated 7 years ago
- PyTorch implementation of the Accelerated SGD algorithm ☆215 · Updated 7 years ago
- PyTorch language model for the 1-Billion Word (LM1B / GBW) dataset ☆123 · Updated 5 years ago
- PyTorch implementation of the Lookahead optimizer ☆189 · Updated 2 years ago
- Jupyter notebook on the Gumbel-max and Gumbel-softmax tricks ☆41 · Updated 2 years ago
- ☆26 · Updated 5 years ago
- Compression of an NMT transformer model with tensor methods ☆48 · Updated 5 years ago
- Checking the interpretability of attention on text classification models ☆48 · Updated 5 years ago
- Decoupled Weight Decay Regularization (ICLR 2019) ☆273 · Updated 6 years ago
- SparseMAP: differentiable sparse structure inference ☆111 · Updated 6 years ago
- A PyTorch implementation of "Quasi-Recurrent Neural Networks" ☆46 · Updated 7 years ago
- Code for running the character-level Sandwich Transformers from the ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- Implementation of the Sparsemax activation in PyTorch ☆159 · Updated 4 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆152 · Updated last year
- Implementation of "Learning with Random Learning Rates" in PyTorch ☆102 · Updated 5 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 4 years ago
- LAnguage Modelling Benchmarks ☆137 · Updated 4 years ago
- A PyTorch implementation of the Transformer model from "Attention Is All You Need" ☆59 · Updated 5 years ago
- A PyTorch implementation of the Reformer network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 2 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆136 · Updated 3 years ago
- ☆64 · Updated 5 years ago
- Code for "Towards Binary-Valued Gates for Robust LSTM Training" ☆76 · Updated 6 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- Recurrent variational autoencoder with dilated convolutions for generating sequential data, implemented in PyTorch ☆71 · Updated 3 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆171 · Updated 5 years ago