XuezheMax / apollo
Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization
☆183 · Updated 3 years ago
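The repo's title describes a parameter-wise diagonal quasi-Newton method. As a rough illustration of that idea (not the paper's exact Apollo update, and not this repo's API), the sketch below estimates the Hessian diagonal elementwise from successive gradient differences, rectifies it to stay positive, and uses it to precondition the step:

```python
# Hedged sketch of a parameter-wise diagonal quasi-Newton step, in the spirit of
# Apollo but deliberately simplified. All names here are illustrative.
import numpy as np

def diag_qn_step(theta, grad, prev_theta, prev_grad, lr=0.5, eps=1e-12):
    """One step preconditioned by an elementwise secant estimate of the
    Hessian diagonal: B ~ |dgrad / dtheta|, rectified to stay positive."""
    dtheta = theta - prev_theta
    dgrad = grad - prev_grad
    denom = np.where(np.abs(dtheta) < eps, eps, dtheta)  # guard tiny steps
    B = np.maximum(np.abs(dgrad / denom), eps)           # rectified curvature
    return theta - lr * grad / B                         # Newton-like step

# Toy ill-conditioned quadratic: f(x) = 0.5 * x.T @ diag(10, 0.1) @ x.
H = np.array([10.0, 0.1])
grad_fn = lambda x: H * x
theta_prev = np.array([1.0, 1.0])
theta = theta_prev - 0.01 * grad_fn(theta_prev)  # one SGD step to seed history
for _ in range(25):
    theta, theta_prev = diag_qn_step(theta, grad_fn(theta),
                                     theta_prev, grad_fn(theta_prev)), theta
print(theta)  # both coordinates shrink toward 0 despite the 100x conditioning gap
```

On a quadratic the secant ratio recovers the Hessian diagonal exactly, so both coordinates converge at the same rate even though their curvatures differ by 100x; plain SGD with a single learning rate could not do that.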
Alternatives and similar repositories for apollo
Users interested in apollo are comparing it to the libraries listed below.
- Implementation and experiments for AdamW in PyTorch ☆94 · Updated 5 years ago
- Accelerate training by storing parameters in one contiguous chunk of memory. ☆291 · Updated 4 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆336 · Updated 5 years ago
- PyTorch implementation of the Lookahead optimizer ☆190 · Updated 2 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆88 · Updated 4 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 3 years ago
- Official PyTorch repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆408 · Updated 10 months ago
- PyTorch implementation of the Hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆99 · Updated 4 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 10 months ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆92 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- PyTorch implementation of Neural Architecture Optimization ☆113 · Updated 4 years ago
- ☆84 · Updated 4 years ago
- ☆131 · Updated 5 years ago
- ☆182 · Updated 2 years ago
- PyTorch implementation of TRP ☆45 · Updated 4 years ago
- Implementation of the Sparsemax activation in PyTorch ☆160 · Updated 5 years ago
- A re-implementation of Fixed-update Initialization ☆153 · Updated 5 years ago
- Implementing "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" in PyTorch ☆70 · Updated 5 years ago
- PyTorch GPU memory checker ☆50 · Updated 6 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆164 · Updated 4 years ago
- Loss and accuracy go opposite ways... right? ☆93 · Updated 5 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆375 · Updated 4 years ago
- Sinkhorn Transformer: practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 5 years ago
- ☆165 · Updated 6 years ago
- A demo of Chen et al., "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks" (ICML 2018) ☆178 · Updated 3 years ago
- AlphaNet: Improved Training of Supernets with Alpha-Divergence ☆98 · Updated 3 years ago
- A package of optimizers implemented in PyTorch ☆65 · Updated 5 years ago
- Implementation of the reversible residual network in PyTorch ☆104 · Updated 3 years ago
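Several of the listed repos implement the Lookahead optimizer, whose rule the bullet spells out as "k steps forward, 1 step back": run an inner optimizer for k fast steps, then interpolate the slow weights toward the fast weights. A minimal sketch with plain SGD as the inner optimizer (names are illustrative, not any repo's API):

```python
# Minimal sketch of Lookahead around SGD: after every k fast steps, pull the
# slow weights toward the fast weights by a factor alpha, then restart the
# fast weights from the slow ones. Function and argument names are made up.
import numpy as np

def lookahead_sgd(grad_fn, theta0, lr=0.1, k=5, alpha=0.5, outer_steps=20):
    slow = np.asarray(theta0, dtype=float).copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                 # k steps forward (fast weights)
            fast -= lr * grad_fn(fast)
        slow += alpha * (fast - slow)      # 1 step back (slow interpolation)
    return slow

# Toy quadratic f(x) = 0.5 * ||x||^2 with gradient x; minimum at the origin.
theta = lookahead_sgd(lambda x: x, np.array([3.0, -2.0]))
print(theta)  # converges close to [0, 0]
```

The slow-weight interpolation damps the variance of the fast trajectory, which is the property the Lookahead paper argues improves stability across inner-optimizer hyperparameters.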