cybertronai / pytorch-lamb
Implementation of the LAMB optimizer from https://arxiv.org/abs/1904.00962 ("Large Batch Optimization for Deep Learning: Training BERT in 76 minutes")
☆376 · Updated 4 years ago
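LAMB extends Adam with a layer-wise trust ratio: each layer's Adam step (plus decoupled weight decay) is rescaled by the ratio of the weight norm to the update norm, which keeps very-large-batch training stable. The sketch below follows the paper's algorithm; the class name `SimpleLamb` and the hyperparameter defaults are illustrative, not this repo's API.

```python
# A minimal sketch of the LAMB update (arXiv:1904.00962), written from the
# paper's algorithm; class name and defaults are illustrative, not this repo's API.
import torch
from torch.optim import Optimizer

class SimpleLamb(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6, weight_decay=0.01):
        super().__init__(params, dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    state["step"] = 0
                    state["exp_avg"] = torch.zeros_like(p)
                    state["exp_avg_sq"] = torch.zeros_like(p)
                state["step"] += 1
                m, v = state["exp_avg"], state["exp_avg_sq"]
                # Adam-style biased first and second moment estimates.
                m.mul_(beta1).add_(p.grad, alpha=1 - beta1)
                v.mul_(beta2).addcmul_(p.grad, p.grad, value=1 - beta2)
                m_hat = m / (1 - beta1 ** state["step"])
                v_hat = v / (1 - beta2 ** state["step"])
                # Adam direction plus decoupled weight decay.
                update = m_hat / (v_hat.sqrt() + group["eps"]) + group["weight_decay"] * p
                # Layer-wise trust ratio: rescale the step by ||w|| / ||update||.
                w_norm, u_norm = p.norm().item(), update.norm().item()
                trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
                p.add_(update, alpha=-group["lr"] * trust)
```

Usage mirrors any PyTorch optimizer, e.g. `opt = SimpleLamb(model.parameters(), lr=2e-3)`. Published implementations typically also clamp the trust ratio and exclude bias/LayerNorm parameters from weight decay; those details are omitted here.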
Alternatives and similar repositories for pytorch-lamb
Users interested in pytorch-lamb are comparing it to the libraries listed below.
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆415 · Updated last year
- Fully featured implementation of Routing Transformer ☆298 · Updated 3 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆468 · Updated 4 years ago
- Experimental ground for optimizing memory of PyTorch models ☆367 · Updated 7 years ago
- PyTorch implementation of the Lookahead Optimizer ☆194 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆269 · Updated 4 years ago
- My take on a practical implementation of Linformer for PyTorch. ☆420 · Updated 3 years ago
- Implementation for the Lookahead Optimizer (a minimal sketch of the update rule follows this list). ☆243 · Updated 3 years ago
- Accelerate training by storing parameters in one contiguous chunk of memory. ☆291 · Updated 4 years ago
- Fast Block Sparse Matrices for PyTorch ☆549 · Updated 4 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆447 · Updated last year
- Transformer training code for sequential tasks ☆611 · Updated 4 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆336 · Updated 6 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 3 months ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated last year
- ☆219 · Updated 5 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆227 · Updated 3 years ago
- A LARS implementation in PyTorch ☆352 · Updated 5 years ago
- Efficient, checkpointed data loading for deep learning with massive data sets. ☆209 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- ☆165 · Updated 6 years ago
- Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch ☆430 · Updated 4 years ago
- Implementations of ideas from recent papers ☆392 · Updated 4 years ago
- Over9000 optimizer ☆424 · Updated 2 years ago
- Noise Contrastive Estimation for softmax output, written in PyTorch ☆319 · Updated 5 years ago
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization ☆182 · Updated 3 years ago
- Implementation of Sparsemax activation in PyTorch (see the sparsemax sketch after this list). ☆163 · Updated 5 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆612 · Updated 2 years ago
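Three of the entries above implement Lookahead ("k steps forward, 1 step back"), which wraps any inner optimizer: the fast weights take k inner steps, then the slow weights are pulled toward them by an interpolation factor alpha and copied back. A minimal sketch under those assumptions; the wrapper name and defaults are illustrative, not any listed repo's API.

```python
# Minimal sketch of the Lookahead wrapper ("k steps forward, 1 step back"),
# illustrative rather than any listed repo's exact implementation.
import torch

class SimpleLookahead:
    def __init__(self, inner, k=5, alpha=0.5):
        self.inner, self.k, self.alpha = inner, k, alpha
        self.counter = 0
        # Slow weights start as a copy of the current (fast) weights.
        self.slow = [
            [p.detach().clone() for p in group["params"]]
            for group in inner.param_groups
        ]

    @torch.no_grad()
    def step(self):
        self.inner.step()  # one fast step with the inner optimizer
        self.counter += 1
        if self.counter % self.k == 0:
            for group, slow_group in zip(self.inner.param_groups, self.slow):
                for p, slow in zip(group["params"], slow_group):
                    # Slow weights move toward fast weights: phi += alpha * (theta - phi).
                    slow.add_(p - slow, alpha=self.alpha)
                    # Fast weights restart from the slow weights.
                    p.copy_(slow)

    def zero_grad(self):
        self.inner.zero_grad()
```

Wrap any optimizer, e.g. `opt = SimpleLookahead(torch.optim.SGD(model.parameters(), lr=0.1))`, and call `opt.step()` / `opt.zero_grad()` as usual.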
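The entmax and sparsemax entries replace softmax with mappings that can assign exactly zero probability; sparsemax in particular is the Euclidean projection of the scores onto the probability simplex. The sketch below implements only the forward pass via the sorted-threshold formula from Martins & Astudillo (2016); the listed libraries also supply the matching backward pass and losses.

```python
# Hedged sketch of the sparsemax forward pass (Martins & Astudillo, 2016);
# the listed libraries also implement the custom backward, omitted here.
import torch

def sparsemax(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Sort scores in decreasing order along `dim`.
    z, _ = torch.sort(scores, dim=dim, descending=True)
    cumsum = z.cumsum(dim)
    k = torch.arange(1, scores.size(dim) + 1, device=scores.device, dtype=scores.dtype)
    # Reshape k so it broadcasts along `dim`.
    shape = [1] * scores.dim()
    shape[dim] = -1
    k = k.view(shape)
    # Support: sorted positions where 1 + k * z_k exceeds the cumulative sum.
    support = (1 + k * z) > cumsum
    k_z = support.sum(dim=dim, keepdim=True).to(scores.dtype)
    # Threshold tau chosen so the result sums to one over the support.
    tau = (torch.where(support, z, torch.zeros_like(z)).sum(dim, keepdim=True) - 1) / k_z
    return torch.clamp(scores - tau, min=0)
```

For example, `sparsemax(torch.tensor([2.0, 1.0, 0.1]))` yields `tensor([1., 0., 0.])`, where softmax would keep all three entries positive.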