google / bi-tempered-loss
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf
☆147 Updated 3 years ago
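The method replaces the log and exp of the ordinary logistic loss with "tempered" versions: a tempered logarithm with temperature t1 < 1 makes the loss bounded (robust to outliers), and a tempered exponential with temperature t2 > 1 gives the softmax heavier tails (robust to label noise). The repo itself ships TensorFlow and JAX implementations; as a rough orientation only, here is a minimal NumPy sketch of the one-hot case following the paper's fixed-point normalization for t2 > 1. The function names are illustrative, not the repo's API, and a fixed iteration count stands in for a proper convergence test.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm; recovers log(x) as t -> 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential; recovers exp(x) as t -> 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(activations, t, n_iters=30):
    """Tempered softmax: find the normalization constant lambda by the
    paper's fixed-point iteration (valid for t >= 1; trivial at t = 1)."""
    mu = np.max(activations)
    a = activations - mu
    for _ in range(n_iters):
        z = np.sum(exp_t(a, t))
        a = (activations - mu) * z ** (1.0 - t)
    z = np.sum(exp_t(a, t))
    lam = -log_t(1.0 / z, t) + mu
    return exp_t(activations - lam, t)

def bi_tempered_loss(activations, label, t1, t2):
    """Bi-tempered logistic loss for a single example with a one-hot
    label given as a class index; reduces to softmax cross-entropy
    when t1 = t2 = 1."""
    probs = tempered_softmax(activations, t2)
    return (-log_t(probs[label], t1)
            - (1.0 - np.sum(probs ** (2.0 - t1))) / (2.0 - t1))
```

For example, `bi_tempered_loss(np.array([2.0, 0.5, -1.0]), label=0, t1=0.7, t2=1.3)` computes a bounded, heavy-tailed variant of the cross-entropy that the same call gives with `t1=1.0, t2=1.0`.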
Alternatives and similar repositories for bi-tempered-loss
Users interested in bi-tempered-loss are comparing it to the libraries listed below.
- Mish Deep Learning Activation Function for PyTorch / FastAI ☆161 Updated 5 years ago
- Implementation for the Lookahead Optimizer. ☆243 Updated 3 years ago
- A simpler version of the self-attention layer from SAGAN, and some image classification results. ☆214 Updated 6 years ago
- Repo to build on / reproduce the record-breaking Ranger-Mish-SelfAttention setup on the FastAI ImageWoof dataset (5 epochs) ☆116 Updated 5 years ago
- PyTorch implementation of the Lookahead Optimizer ☆194 Updated 3 years ago
- Utilities for PyTorch ☆88 Updated 2 years ago
- Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers ☆168 Updated 3 years ago
- Implementations of ideas from recent papers ☆392 Updated 4 years ago
- Over9000 optimizer ☆424 Updated 2 years ago
- Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning rate scheduler, and "Cyclical Learning Rates for Training Neural Networks" ☆152 Updated 6 years ago
- Implementation and experiments for AdamW on PyTorch ☆94 Updated 5 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆336 Updated 6 years ago
- homura is a library for fast prototyping of DL research ☆106 Updated 3 years ago
- A PyTorch dataset sampler for always sampling balanced batches. ☆117 Updated 4 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 Updated last year
- Decoupled Weight Decay Regularization (ICLR 2019) ☆281 Updated 6 years ago
- Unofficial PyTorch implementation of EvoNorm ☆122 Updated 4 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆87 Updated 4 years ago
- A specially designed light version of Fast AutoAugment ☆171 Updated 5 years ago
- Torchélie is a set of utility functions, layers, losses, models, trainers and other things for PyTorch. ☆110 Updated last month
- Using the CLR algorithm for training (https://arxiv.org/abs/1506.01186) ☆108 Updated 7 years ago
- Data Reading Blocks for Python ☆105 Updated 4 years ago
- Experiments with Adam/AdamW/amsgrad ☆201 Updated 7 years ago
- Smooth Loss Functions for Deep Top-k Classification ☆257 Updated 4 years ago
- Unsupervised Data Augmentation experiments in PyTorch ☆59 Updated 6 years ago
- Implements the unsupervised pre-training of convolutional neural networks ☆252 Updated 4 years ago
- An implementation of "mixup: Beyond Empirical Risk Minimization" ☆286 Updated 7 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆187 Updated last year
- Multi-Task Learning Framework on PyTorch. State-of-the-art methods are implemented to effectively train models on multiple tasks. ☆149 Updated 6 years ago
- A TensorFlow re-implementation of Momentum Contrast (MoCo): https://arxiv.org/abs/1911.05722 ☆159 Updated 2 years ago