XuezheMax / apollo
Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization
☆181 · Updated 3 years ago
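A minimal usage sketch follows, assuming the repo exposes an `Apollo` class with the standard `torch.optim.Optimizer` interface; the import path and the `lr` value below are illustrative assumptions, not taken from the repo:

```python
# Hypothetical usage sketch -- assumes an `Apollo` optimizer class following the
# standard torch.optim.Optimizer interface; the module name is an assumption.
import torch
import torch.nn.functional as F
from apollo_optim import Apollo  # hypothetical import path

model = torch.nn.Linear(10, 1)
optimizer = Apollo(model.parameters(), lr=0.01)  # parameter-wise diagonal quasi-Newton steps

x, y = torch.randn(32, 10), torch.randn(32, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```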
Related projects
Alternatives and complementary repositories for apollo
- Official PyTorch repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆407 · Updated 3 months ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch; see the Lookahead sketch after this list ☆334 · Updated 5 years ago
- PyTorch implementation of the Lookahead optimizer ☆188 · Updated 2 years ago
- Implementation and experiments for AdamW in PyTorch ☆93 · Updated 4 years ago
- Implementation of the Sparsemax activation in PyTorch ☆156 · Updated 4 years ago
- PyTorch implementation of the Hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆98 · Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆162 · Updated 3 years ago
- The implementation of "Self-Supervised Generalisation with Meta Auxiliary Learning" [NeurIPS 2019] ☆170 · Updated 2 years ago
- Sinkhorn Transformer - practical implementation of Sparse Sinkhorn Attention ☆253 · Updated 3 years ago
- Loss and accuracy go opposite ways...right? ☆90 · Updated 4 years ago
- Accelerate training by storing parameters in one contiguous chunk of memory ☆291 · Updated 4 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆145 · Updated 5 years ago
- Implementation of LAMB (https://arxiv.org/abs/1904.00962, "Large Batch Optimization for Deep Learning: Training BERT in 76 minutes") ☆369 · Updated 3 years ago
- Learning Sparse Neural Networks through L0 regularization ☆239 · Updated 4 years ago
- Papers on normalization techniques, with collected code releases ☆225 · Updated 4 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 3 years ago
- A New Optimization Technique for Deep Neural Networks ☆533 · Updated 2 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 3 months ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆150 · Updated last year
- A re-implementation of Fixup (Fixed-Update Initialization) ☆151 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆250 · Updated 3 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆467 · Updated 4 years ago
- My demo of Chen et al., "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks" (ICML 2018) ☆169 · Updated 3 years ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?"; see the flooding sketch after this list ☆92 · Updated last year
- Deep Isometric Learning for Visual Recognition (ICML 2020) ☆143 · Updated 2 years ago
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning-rate scheduler, and "Cyclical Learning Rates for Training Neural Networks" ☆150 · Updated 5 years ago
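Two of the entries above implement Lookahead. A minimal sketch of the mechanism ("k steps forward, 1 step back"), illustrating the idea rather than either repo's exact code: an inner optimizer takes k fast steps, then the slow weights are interpolated toward the fast weights and the fast weights are reset.

```python
import torch

class Lookahead:
    """Minimal sketch of Lookahead: k fast steps, then 1 slow step back."""

    def __init__(self, inner, k=5, alpha=0.5):
        self.inner = inner      # any torch.optim.Optimizer (the "fast" optimizer)
        self.k = k              # fast steps between slow-weight updates
        self.alpha = alpha      # interpolation factor toward the fast weights
        self.steps = 0
        # snapshot of the slow weights, one per parameter
        self.slow = [p.detach().clone()
                     for g in inner.param_groups for p in g["params"]]

    def zero_grad(self):
        self.inner.zero_grad()

    def step(self):
        self.inner.step()       # one fast step
        self.steps += 1
        if self.steps % self.k == 0:
            fast = [p for g in self.inner.param_groups for p in g["params"]]
            with torch.no_grad():
                for p, s in zip(fast, self.slow):
                    s += self.alpha * (p - s)  # slow <- slow + alpha * (fast - slow)
                    p.copy_(s)                 # restart fast weights from the slow weights
```

Usage: wrap any PyTorch optimizer, e.g. `opt = Lookahead(torch.optim.SGD(model.parameters(), lr=0.1), k=5, alpha=0.5)`, then call `opt.zero_grad()` and `opt.step()` as usual.

The flooding regularizer entry amounts to a one-line change to the training loss: with flood level b, the paper optimizes |loss − b| + b, which descends when the loss is above b and ascends when it is below, keeping training loss near b. A minimal sketch (the flood level 0.05 is an arbitrary example value, not a recommendation):

```python
import torch

def flood(loss: torch.Tensor, b: float = 0.05) -> torch.Tensor:
    # |loss - b| + b: gradient descent when loss > b, gentle ascent when loss < b
    return (loss - b).abs() + b

# usage: replace `loss.backward()` with `flood(loss, b=0.05).backward()`
```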