tuantle / regression-losses-pytorch
Experimenting with different regression losses. Implemented in PyTorch.
☆147 · Updated 6 years ago
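For readers new to the topic, here is a minimal sketch of the kind of custom regression loss this repository experiments with. The choice of log-cosh and the `LogCoshLoss` name are illustrative assumptions, not necessarily one of the losses implemented in the repo.

```python
import math

import torch
import torch.nn.functional as F
from torch import nn


class LogCoshLoss(nn.Module):
    """Log-cosh regression loss: roughly quadratic (MSE-like) for small
    residuals and linear (MAE-like) for large ones, so outliers have a
    bounded influence on the gradient."""

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        diff = (pred - target).abs()
        # Numerically stable log(cosh(x)) = |x| + softplus(-2|x|) - log(2)
        return (diff + F.softplus(-2.0 * diff) - math.log(2.0)).mean()


# Usage (illustrative): loss = LogCoshLoss()(model(x), y); loss.backward()
```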
Alternatives and similar repositories for regression-losses-pytorch
Users interested in regression-losses-pytorch are comparing it to the libraries listed below.
- PyTorch implementation of some learning rate schedulers for deep learning researchers. ☆90 · Updated 2 years ago
- Custom loss functions to use in (mainly) PyTorch. ☆39 · Updated 4 years ago
- Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning, ICLR 2020 ☆235 · Updated 2 years ago
- Collection of PyTorch Lightning implementations of Generative Adversarial Network varieties presented in research papers. ☆168 · Updated 2 months ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆160 · Updated last year
- This module implements Spatial Pyramid Pooling (SPP) and Temporal Pyramid Pooling (TPP) as described in different papers. ☆53 · Updated 5 years ago
- A brief tutorial on the Wasserstein auto-encoder ☆83 · Updated 6 years ago
- Loss function which directly targets ROC-AUC ☆246 · Updated 2 years ago
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆139 · Updated 4 years ago
- Collection of the latest, greatest deep learning optimizers (for PyTorch) - CNN, NLP suitable ☆215 · Updated 4 years ago
- A PyTorch dataset sampler for always sampling balanced batches. ☆115 · Updated 4 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data" published at ICLR 2022. https://arxiv.org/abs/21… ☆121 · Updated 2 years ago
- Implements the https://arxiv.org/abs/1711.05101 AdamW optimizer, cosine learning rate scheduler and "Cyclical Learning Rates for Training Neu… ☆149 · Updated 5 years ago
- A fully invertible U-Net for memory efficiency in PyTorch. ☆126 · Updated 2 years ago
- Ranger deep learning optimizer rewritten to use the newest components ☆329 · Updated last year
- PyTorch implementations of dropout variants ☆87 · Updated 7 years ago
- Implementation of Nyström self-attention, from the paper Nyströmformer ☆135 · Updated 2 months ago
- [ICML 2020] Code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆92 · Updated 2 years ago
- A demo of Chen et al., "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks", ICML 2018 ☆178 · Updated 3 years ago
- Toolbox for PyTorch ☆188 · Updated last month
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆429 · Updated 8 months ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆209 · Updated last year
- Mutual information in PyTorch ☆112 · Updated last year
- Traditional machine learning models for large-scale datasets in PyTorch. ☆126 · Updated this week
- Adaptive Gradient Clipping ☆133 · Updated 2 years ago
- Neural Spline Flow, RealNVP, Autoregressive Flow, 1x1 Conv in PyTorch. ☆279 · Updated last year
- Learning rate warmup in PyTorch ☆410 · Updated 2 months ago
- PyTorch boilerplate for research ☆609 · Updated 3 years ago
- Multi-task learning framework on PyTorch. State-of-the-art methods are implemented to effectively train models on multiple tasks. ☆149 · Updated 6 years ago
- PyTorch optimizer for simulated annealing ☆27 · Updated 7 years ago