google / bi-tempered-loss
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf
☆147 · Updated 3 years ago
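The repo implements the bi-tempered logistic loss from the paper above: a tempered softmax (temperature t2 ≥ 1) paired with a tempered log loss (temperature t1 ≤ 1). A minimal NumPy sketch of the idea follows; the function names are illustrative (not the repo's API), and the normalization constant is found by bisection rather than the paper's fixed-point iteration:

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm: (x^(1-t) - 1) / (1 - t); the plain log at t = 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential, the inverse of log_t; the plain exp at t = 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

def tempered_softmax(a, t, iters=60):
    """Find lam such that sum_i exp_t(a_i - lam) = 1, by bisection.
    Assumes t >= 1, where the sum is monotone decreasing in lam."""
    lo = np.max(a)                      # sum >= 1 here, since exp_t(0) = 1
    width = 1.0
    while np.sum(exp_t(a - (lo + width), t)) > 1.0:
        width *= 2.0                    # grow the bracket until sum <= 1
    hi = lo + width
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(exp_t(a - mid, t)) > 1.0:
            lo = mid
        else:
            hi = mid
    return exp_t(a - 0.5 * (lo + hi), t)

def bi_tempered_loss(activations, label, t1, t2):
    """Bi-tempered logistic loss for a one-hot label at index `label`."""
    p = tempered_softmax(activations, t2)
    # Bregman-divergence form, specialized to a one-hot label (t1 != 2):
    return -log_t(p[label], t1) - (1.0 - np.sum(p ** (2.0 - t1))) / (2.0 - t1)
```

At t1 = t2 = 1 this reduces to ordinary softmax cross-entropy; t1 < 1 bounds the loss on badly mislabeled points, and t2 > 1 gives the softmax a heavier tail.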
Alternatives and similar repositories for bi-tempered-loss
Users interested in bi-tempered-loss are comparing it to the libraries listed below.
- Mish Deep Learning Activation Function for PyTorch / FastAI ☆161 · Updated 5 years ago
- PyTorch implementation of the Lookahead optimizer ☆194 · Updated 3 years ago
- A simpler version of the self-attention layer from SAGAN, and some image classification results. ☆212 · Updated 5 years ago
- Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers ☆169 · Updated 3 years ago
- Repo to build on / reproduce the record-breaking Ranger-Mish-SelfAttention setup on the FastAI ImageWoof dataset in 5 epochs ☆116 · Updated 5 years ago
- A PyTorch dataset sampler for always sampling balanced batches. ☆116 · Updated 4 years ago
- Implementation of the Lookahead optimizer. ☆241 · Updated 3 years ago
- Implementation and experiments for AdamW in PyTorch ☆94 · Updated 5 years ago
- Implementations of ideas from recent papers ☆392 · Updated 4 years ago
- Unofficial PyTorch implementation of EvoNorm ☆122 · Updated 4 years ago
- homura is a library for fast prototyping of DL research ☆106 · Updated 3 years ago
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning rate scheduler, and "Cyclical Learning Rates for Training Neu…☆151 · Updated 6 years ago
- Over9000 optimizer ☆425 · Updated 2 years ago
- Experiments with Adam/AdamW/amsgrad ☆201 · Updated 6 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆188 · Updated last year
- Data Reading Blocks for Python ☆105 · Updated 4 years ago
- Utilities for PyTorch ☆88 · Updated 2 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆337 · Updated 6 years ago
- Decoupled Weight Decay Regularization (ICLR 2019) ☆277 · Updated 6 years ago
- Simple TensorFlow implementation of "Adaptive Gradient Methods with Dynamic Bound of Learning Rate" (ICLR 2019) ☆150 · Updated 6 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆61 · Updated last year
- PyTorch functions and utilities to make your life easier ☆194 · Updated 4 years ago
- Unsupervised Data Augmentation experiments in PyTorch ☆60 · Updated 6 years ago
- Simple TensorFlow implementation of "On the Variance of the Adaptive Learning Rate and Beyond" ☆97 · Updated 5 years ago
- Example code showing how to use NVIDIA DALI in PyTorch, with fallback to torchvision. Contains a few differences from the official Nvidia …☆197 · Updated 5 years ago
- Torchélie is a set of utility functions, layers, losses, models, trainers, and other things for PyTorch. ☆110 · Updated last week
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆87 · Updated 4 years ago
- Using the CLR algorithm for training (https://arxiv.org/abs/1506.01186) ☆108 · Updated 7 years ago
- A specially designed light version of Fast AutoAugment ☆171 · Updated 5 years ago
- Complementary code for the Targeted Dropout paper ☆254 · Updated 5 years ago
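Three entries above implement the Lookahead optimizer, whose scheme is summarized by its paper title: the fast weights take k optimizer steps, then the slow weights interpolate one step back toward them. A minimal NumPy sketch wrapped around plain SGD (the function name and signature are illustrative, not any listed repo's API):

```python
import numpy as np

def lookahead_sgd(grad_fn, w0, lr=0.1, k=5, alpha=0.5, steps=200):
    """Lookahead wrapper around plain SGD: the fast weights take k SGD
    steps, then the slow weights move a fraction alpha toward them
    ("1 step back") and the fast weights restart from the slow weights."""
    slow = np.array(w0, dtype=float)
    fast = slow.copy()
    for step in range(1, steps + 1):
        fast -= lr * grad_fn(fast)          # inner (fast) SGD update
        if step % k == 0:                   # synchronization point
            slow += alpha * (fast - slow)   # slow weights: 1 step back
            fast = slow.copy()              # restart fast from slow
    return slow
```

In practice the inner optimizer is usually Adam or RAdam (the "Ranger" combination that several repos above build on), but the slow/fast bookkeeping is identical.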