YanaiEliyahu / AdasOptimizer
ADAS is short for Adaptive Step Size. Unlike other optimizers, which only normalize the derivative, it fine-tunes the step size itself, making step-size scheduling obsolete and achieving state-of-the-art training performance.
☆85 · Updated 4 years ago
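The core idea in the description, adapting the step size itself rather than relying on a hand-tuned learning-rate schedule, can be sketched with a minimal hypergradient-style update. This is an illustrative assumption of the general "learn the learning rate" idea, not the algorithm implemented in this repository; the function name and hyperparameters below are hypothetical.

```python
import numpy as np

def adaptive_step_size_sgd(w, grad, prev_grad, lr, meta_lr=1e-4):
    """Illustrative sketch only: adapt the step size from gradient agreement.

    When the current gradient agrees with the previous one, the step size
    grows; when they disagree, it shrinks. This mirrors the general
    hypergradient-descent idea, not AdasOptimizer's exact update rule.
    """
    lr = max(lr + meta_lr * float(np.dot(grad, prev_grad)), 0.0)  # adapt the step size
    w = w - lr * grad                                             # ordinary gradient step
    return w, lr

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w, lr, prev_grad = np.ones(3), 0.01, np.zeros(3)
for _ in range(100):
    grad = w.copy()
    w, lr = adaptive_step_size_sgd(w, grad, prev_grad, lr)
    prev_grad = grad
```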
Alternatives and similar repositories for AdasOptimizer
Users interested in AdasOptimizer are comparing it to the libraries listed below.
- a mini Deep Learning framework supporting GPU accelerations written with CUDA ☆32 · Updated 4 years ago
- Minimal implementation of adaptive gradient clipping (https://arxiv.org/abs/2102.06171) in TensorFlow 2. ☆85 · Updated 4 years ago
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization".… ☆30 · Updated 4 years ago
- Lite Inference Toolkit (LIT) for PyTorch ☆161 · Updated 3 years ago
- TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237. ☆87 · Updated 3 years ago
- graftr: an interactive shell to view and edit PyTorch checkpoints. ☆113 · Updated 4 years ago
- Deep Learning project template best practices with Pytorch Lightning, Hydra, Tensorboard. ☆159 · Updated 4 years ago
- Collection of the latest, greatest, deep learning optimizers (for Pytorch) - CNN, NLP suitable ☆216 · Updated 4 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- Light Face Detection using PyTorch Lightning ☆83 · Updated last year
- Auto-Magical Deploy AI model at large scale, high performance, and easy to use ☆66 · Updated 2 years ago
- PyTorch dataset extended with map, cache etc. (tensorflow.data like) ☆330 · Updated 3 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- Implements sharpness-aware minimization (https://arxiv.org/abs/2010.01412) in TensorFlow 2. ☆60 · Updated 3 years ago
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization". ☆159 · Updated 4 years ago
- Zalo AI Challenge 2020 - Top 2 @ Voice Verification ☆15 · Updated 2 years ago
- Implementation of modern data augmentation techniques in TensorFlow 2.x to be used in your training pipeline. ☆34 · Updated 5 years ago
- Ranger deep learning optimizer rewrite to use newest components ☆333 · Updated last year
- Keras implementation of Normalizer-Free Networks and SGD - Adaptive Gradient Clipping ☆70 · Updated 4 years ago
- ☆114 · Updated 4 years ago
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆128 · Updated last year
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch ☆252 · Updated 2 years ago
- Unofficial PyTorch Implementation of EvoNorm ☆122 · Updated 3 years ago
- Learning to Initialize Neural Networks for Stable and Efficient Training ☆139 · Updated 3 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ ☆347 · Updated last year
- 1st place solution for SIIM-FISABIO-RSNA COVID-19 Detection Challenge ☆176 · Updated 3 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆239 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- Catalyst.Segmentation ☆28 · Updated 3 years ago