google / bi-tempered-loss
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf
☆147 · Updated 3 years ago
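For context, the repo's loss replaces the log and exp inside softmax cross entropy with "tempered" versions controlled by two temperatures: t1 < 1 bounds the loss for outliers and t2 > 1 gives the softmax a heavier tail. Below is a minimal, unofficial PyTorch sketch following the formulas in the paper; it assumes t2 > 1 for the fixed-point normalization and omits the custom gradient of the official implementation.

```python
import torch

def log_t(u, t):
    # Tempered logarithm; recovers log(u) as t -> 1.
    return torch.log(u) if t == 1.0 else (u.pow(1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    # Tempered exponential; recovers exp(u) as t -> 1.
    # The clamp matters for t < 1, where the bracket can go negative.
    if t == 1.0:
        return torch.exp(u)
    return torch.clamp(1.0 + (1.0 - t) * u, min=0.0).pow(1.0 / (1.0 - t))

def tempered_softmax(activations, t, num_iters=5):
    # Fixed-point iteration for the normalizer (assumes t > 1),
    # mirroring the scheme used in the paper's reference code.
    mu, _ = activations.max(dim=-1, keepdim=True)
    a0 = activations - mu
    a = a0
    for _ in range(num_iters):
        z = exp_t(a, t).sum(dim=-1, keepdim=True)
        a = a0 * z.pow(1.0 - t)
    z = exp_t(a, t).sum(dim=-1, keepdim=True)
    normalization = -log_t(1.0 / z, t) + mu
    return exp_t(activations - normalization, t)

def bi_tempered_logistic_loss(activations, labels, t1=0.7, t2=1.3, num_iters=5):
    # labels: one-hot (or soft) targets with the same shape as activations.
    probs = tempered_softmax(activations, t2, num_iters)
    loss = (labels * (log_t(labels + 1e-10, t1) - log_t(probs, t1))
            - labels.pow(2.0 - t1) / (2.0 - t1)
            + probs.pow(2.0 - t1) / (2.0 - t1))
    return loss.sum(dim=-1)
```

With t1 = t2 = 1 and one-hot labels this reduces to ordinary softmax cross entropy, which is a useful sanity check when trying the sketch out.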
Alternatives and similar repositories for bi-tempered-loss
Users interested in bi-tempered-loss are comparing it to the libraries listed below.
- Mish deep learning activation function for PyTorch / FastAI (see the Mish sketch after this list) ☆161 · Updated 5 years ago
- Implementation and experiments for AdamW in PyTorch ☆94 · Updated 5 years ago
- PyTorch implementation of the Lookahead optimizer ☆195 · Updated 3 years ago
- Repo to build on / reproduce the record-breaking Ranger-Mish-SelfAttention setup on the FastAI ImageWoof dataset in 5 epochs ☆116 · Updated 5 years ago
- A simpler version of the self-attention layer from SAGAN, and some image classification results ☆214 · Updated 6 years ago
- Implementations of ideas from recent papers ☆392 · Updated 4 years ago
- A PyTorch dataset sampler for always sampling balanced batches ☆118 · Updated 4 years ago
- Unofficial PyTorch implementation of EvoNorm ☆123 · Updated 4 years ago
- Experiments with Adam/AdamW/amsgrad ☆201 · Updated 7 years ago
- Utilities for PyTorch ☆88 · Updated 3 years ago
- Implementation of the Lookahead optimizer ☆243 · Updated 3 years ago
- homura, a library for fast prototyping of DL research ☆106 · Updated 3 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated last year
- Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and learning-rate multipliers ☆168 · Updated 3 years ago
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning-rate scheduler, and "Cyclical Learning Rates for Training Neural Networks" ☆152 · Updated 6 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch; see the Lookahead sketch after this list ☆337 · Updated 6 years ago
- Over9000 optimizer ☆424 · Updated 2 years ago
- Data reading blocks for Python ☆104 · Updated 4 years ago
- A specially designed light version of Fast AutoAugment ☆171 · Updated 5 years ago
- Decoupled Weight Decay Regularization (ICLR 2019) ☆283 · Updated 6 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆87 · Updated 4 years ago
- Complementary code for the Targeted Dropout paper ☆255 · Updated 6 years ago
- Semi-supervised ImageNet1K models ☆245 · Updated 6 years ago
- Smooth Loss Functions for Deep Top-k Classification ☆258 · Updated 4 years ago
- Useful PyTorch functions and modules not implemented in PyTorch by default ☆188 · Updated last year
- Unsupervised Data Augmentation experiments in PyTorch ☆59 · Updated 6 years ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?"; see the flooding sketch after this list ☆93 · Updated 2 years ago
- Collection of the latest deep learning optimizers for PyTorch, suitable for CNN and NLP models ☆217 · Updated 4 years ago
- Implementation of soft parameter sharing for neural networks ☆70 · Updated 4 years ago
- High-level, batteries-included neural network training library for PyTorch ☆403 · Updated 3 years ago
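Several of the techniques above are small enough to state inline. Mish, from the first item, is a one-line activation; this is an illustrative sketch, not the linked repo's code:

```python
import torch
import torch.nn.functional as F

class Mish(torch.nn.Module):
    # Mish activation: x * tanh(softplus(x)).
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))
```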
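The Lookahead rule behind several items above ("k steps forward, 1 step back") can be sketched as a thin wrapper around any inner optimizer. Again, an illustrative sketch rather than any listed repo's API:

```python
import torch

class Lookahead:
    """Sketch of Lookahead: the inner optimizer takes k fast steps,
    then the slow weights move toward the fast weights by factor
    alpha and the fast weights are reset to the slow ones."""

    def __init__(self, optimizer, k=5, alpha=0.5):
        self.optimizer, self.k, self.alpha = optimizer, k, alpha
        self.steps = 0
        # One slow-weight snapshot per parameter.
        self.slow = [p.detach().clone()
                     for group in optimizer.param_groups
                     for p in group["params"]]

    def zero_grad(self):
        self.optimizer.zero_grad()

    def step(self):
        self.optimizer.step()  # one fast step
        self.steps += 1
        if self.steps % self.k == 0:
            fast = [p for group in self.optimizer.param_groups
                    for p in group["params"]]
            with torch.no_grad():
                for s, f in zip(self.slow, fast):
                    s += self.alpha * (f - s)  # slow weights interpolate
                    f.copy_(s)                 # "1 step back": reset fast
```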
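And the flooding regularizer from the ICML 2020 item is a one-line change to the training loss: |loss − b| + b equals the loss above the "flood level" b and reverses the gradient below it, so the training loss floats around b instead of going to zero. A minimal sketch; the flood level shown is an illustrative value, not one from the paper:

```python
import torch
import torch.nn.functional as F

def flooded_loss(logits, targets, b=0.05):
    # b is the flood level hyperparameter (0.05 here is illustrative).
    loss = F.cross_entropy(logits, targets)
    # Identical to the loss above b, gradient-reversed below it.
    return (loss - b).abs() + b
```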