YanaiEliyahu / AdasOptimizer
ADAS is short for Adaptive Step Size. Unlike optimizers that merely normalize the derivative, it fine-tunes the step size itself, making step-size scheduling obsolete and achieving state-of-the-art training performance.
☆85 · Updated 4 years ago
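To make the idea of adaptive step sizes concrete, here is a minimal PyTorch sketch of a related, well-known technique: hypergradient descent (https://arxiv.org/abs/1703.04782), where the learning rate is nudged up or down based on the agreement between successive gradients. This is only an illustration of the general concept, not the update rule implemented by AdasOptimizer; the class name `HypergradientSGD` and the `hyper_lr` parameter are invented for this example.

```python
# Illustrative sketch only: per-parameter learning-rate adaptation in the
# style of hypergradient descent (arXiv:1703.04782). This is NOT the
# AdasOptimizer algorithm; see the repository for the actual update rule.
import torch


class HypergradientSGD(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-2, hyper_lr=1e-4):
        super().__init__(params, dict(lr=lr, hyper_lr=hyper_lr))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                lr = state.get("lr", group["lr"])
                prev_grad = state.get("prev_grad")
                if prev_grad is not None:
                    # Increase the step size when successive gradients agree,
                    # decrease it when they point in opposite directions.
                    lr += group["hyper_lr"] * torch.sum(p.grad * prev_grad).item()
                state["lr"] = lr
                state["prev_grad"] = p.grad.clone()
                p.add_(p.grad, alpha=-lr)
```

Usage would mirror any `torch.optim` optimizer: construct it over `model.parameters()`, call `loss.backward()`, then `step()`.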
Alternatives and similar repositories for AdasOptimizer
Users interested in AdasOptimizer are comparing it to the libraries listed below.
- Deep Learning project template best practices with Pytorch Lightning, Hydra, Tensorboard. ☆159 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- Zalo AI Challenge 2020 - Top 2 @ Voice Verification ☆15 · Updated 2 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- Auto-magical deployment of AI models at large scale, with high performance and ease of use ☆66 · Updated last year
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆128 · Updated last year
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization".… ☆30 · Updated 4 years ago
- Create SSH tunnel to a running Colab notebook ☆67 · Updated 3 years ago
- Light Face Detection using PyTorch Lightning ☆84 · Updated last year
- Knowledge Distillation Toolkit ☆88 · Updated 5 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆188 · Updated last year
- Collection of the latest, greatest, deep learning optimizers (for Pytorch) - CNN, NLP suitable ☆215 · Updated 4 years ago
- TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237. ☆87 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- graftr: an interactive shell to view and edit PyTorch checkpoints. ☆113 · Updated 4 years ago
- Minimal implementation of adaptive gradient clipping (https://arxiv.org/abs/2102.06171) in TensorFlow 2; a rough sketch of the technique follows after this list. ☆84 · Updated 4 years ago
- Lite Inference Toolkit (LIT) for PyTorch ☆161 · Updated 3 years ago
- This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆180 · Updated 3 years ago
- A mini deep learning framework supporting GPU acceleration, written in CUDA ☆32 · Updated 4 years ago
- Keras style progressbar for Pytorch (PK Bar) ☆32 · Updated last year
- Large dataset storage format for Pytorch ☆45 · Updated 3 years ago
- PyTorch dataset extended with map, cache etc. (tensorflow.data like) ☆329 · Updated 3 years ago
- Gradient Accumulation for TensorFlow 2 ☆53 · Updated last year
- ☆18 · Updated 2 years ago
- Learning to Initialize Neural Networks for Stable and Efficient Training ☆139 · Updated 3 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- ☆47 · Updated 4 years ago
- A collection of code snippets for my PyTorch Lightning projects ☆107 · Updated 4 years ago
- Configuration classes enabling Hydra to configure and manage Pytorch Lightning projects. ☆42 · Updated 4 years ago
- End-to-End Vietnamese Speech Recognition using wav2vec 2.0 ☆99 · Updated 3 years ago
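The adaptive gradient clipping entry above links a TensorFlow 2 implementation of the technique from the NFNets paper (https://arxiv.org/abs/2102.06171). The sketch below is a rough PyTorch illustration of the same idea under simplifying assumptions (per-tensor rather than the paper's unit-wise norms); the function name `adaptive_grad_clip_` and its default thresholds are made up for this example and are not the linked repo's API.

```python
# Rough sketch of adaptive gradient clipping (AGC, arXiv:2102.06171):
# rescale a parameter's gradient whenever its norm exceeds a fixed
# fraction of the parameter's own norm. Simplified to per-tensor norms;
# the paper uses unit-wise (row-wise) norms.
import torch


def adaptive_grad_clip_(parameters, clip=0.01, eps=1e-3):
    for p in parameters:
        if p.grad is None:
            continue
        param_norm = p.detach().norm().clamp(min=eps)
        grad_norm = p.grad.detach().norm()
        max_norm = clip * param_norm
        if grad_norm > max_norm:
            # Shrink the gradient so its norm equals clip * ||param||.
            p.grad.mul_(max_norm / (grad_norm + 1e-6))
```

It would be called between `loss.backward()` and `optimizer.step()`, e.g. `adaptive_grad_clip_(model.parameters())`.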