yuanwei2019 / EAdam-optimizer
Some improvements on Adam
☆28 · Updated 4 years ago
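For orientation, the core change EAdam proposes over Adam (per the EAdam paper, not necessarily this repository's exact code) is to fold the small constant ε into the second-moment accumulator at every step, instead of adding it to the denominator once at update time. A minimal scalar sketch of one step, with illustrative names:

```python
import math

def eadam_step(param, grad, m, v, t,
               lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One EAdam update on a scalar parameter.

    EAdam's tweak vs. Adam: eps is accumulated into v each step,
    and the denominator uses sqrt(v_hat) with no extra eps.
    (Sketch based on the paper's description; names are illustrative.)
    """
    m = beta1 * m + (1 - beta1) * grad            # first moment, as in Adam
    v = beta2 * v + (1 - beta2) * grad * grad + eps  # eps folded into v (EAdam)
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / math.sqrt(v_hat)  # no "+ eps" here
    return param, m, v
```

Because ε keeps compounding inside v, its effective influence on the step size is larger than in vanilla Adam, which is the behavioral difference the repository's "improvements" refer to.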
Alternatives and similar repositories for EAdam-optimizer
Users interested in EAdam-optimizer are comparing it to the repositories listed below.
- PyTorch Codes for Haar Graph Pooling ☆11 · Updated 2 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 5 years ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆24 · Updated 5 years ago
- ☆33 · Updated 4 years ago
- PyTorch implementation of FNet: Mixing Tokens with Fourier transforms ☆27 · Updated 4 years ago
- NeurIPS 2022: Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks ☆37 · Updated 2 years ago
- Transformers are Graph Neural Networks! ☆54 · Updated 4 years ago
- [NeurIPS-2021] Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation ☆41 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆92 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- Implementation for ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) ☆16 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- ☆9 · Updated 4 years ago
- [NeurIPS 2020] Official implementation: "SMYRF: Efficient Attention using Asymmetric Clustering" ☆50 · Updated last year
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 3 years ago
- AAAI 2021: Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels ☆23 · Updated 4 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- Code for the ICML 2020 paper "Do RNN and LSTM Have Long Memory?" ☆17 · Updated 4 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 4 years ago
- PyTorch reimplementation of the Smooth ReLU activation function proposed in the paper "Real World Large Scale Recommendation Systems Repr… ☆22 · Updated 3 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆69 · Updated 2 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆78 · Updated 4 years ago
- ☆95 · Updated 2 years ago
- ☆42 · Updated 5 years ago
- TedNet: A PyTorch Toolkit for Tensor Decomposition Networks ☆97 · Updated 3 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆61 · Updated last year
- Stochastic Weight Averaging tutorials using PyTorch ☆33 · Updated 4 years ago
- Code for "Reparameterizable Subset Sampling via Continuous Relaxations" (IJCAI 2019) ☆57 · Updated last year