universome / loss-patterns
Loss Patterns of Neural Networks
☆85 · Updated 3 years ago
Alternatives and similar repositories for loss-patterns
Users who are interested in loss-patterns are comparing it to the libraries listed below.
- A lightweight library for TensorFlow 2.0 ☆66 · Updated 5 years ago
- PyTorch implementation of Variational Dropout Sparsifies Deep Neural Networks ☆83 · Updated 3 years ago
- Implements stochastic line search ☆118 · Updated 2 years ago
- 👩 PyTorch and Jax code for the Madam optimiser. ☆51 · Updated 4 years ago
- 🧀 PyTorch code for the Fromage optimiser. ☆125 · Updated last year
- Differentiable bitonic sorting ☆141 · Updated 5 years ago
- Torchélie is a set of utility functions, layers, losses, models, trainers and other things for PyTorch. ☆110 · Updated 7 months ago
- ☆61 · Updated 2 years ago
- Basic experiment framework for TensorFlow. ☆91 · Updated 4 years ago
- PyTorch functions and utilities to make your life easier ☆195 · Updated 4 years ago
- Code for paper: "Support Vector Machines, Wasserstein's distance and gradient-penalty GANs maximize a margin" ☆178 · Updated 5 years ago
- This repository is no longer maintained. Check… ☆81 · Updated 5 years ago
- Code for: Implicit Competitive Regularization in GANs ☆114 · Updated 3 years ago
- TBA ☆76 · Updated 6 years ago
- Notes from NeurIPS 2019 ☆29 · Updated 5 years ago
- Pretrained TorchVision models on the CIFAR10 dataset (with weights) ☆24 · Updated 4 years ago
- Code for Neural Arithmetic Units (ICLR) and Measuring Arithmetic Extrapolation Performance (SEDL | NeurIPS) ☆146 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- Original PyTorch implementation of the Leap meta-learner (https://arxiv.org/abs/1812.01054) along with code for running the Omniglot expe… ☆148 · Updated 2 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 5 years ago
- A discrete sequential VAE ☆40 · Updated 5 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆53 · Updated 6 years ago
- Implementations of quasi-hyperbolic optimization algorithms. ☆102 · Updated 5 years ago
- Code for MSID, a Multi-Scale Intrinsic Distance for comparing generative models, studying neural networks, and more! ☆51 · Updated 6 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- Research boilerplate for PyTorch. ☆149 · Updated 2 years ago
- An implementation of Shampoo ☆77 · Updated 7 years ago
- Probabilistic classification in PyTorch/TensorFlow/scikit-learn with Fenchel-Young losses ☆186 · Updated last year
- Equi-normalization of Neural Networks ☆115 · Updated 6 years ago
- ☆32 · Updated 7 years ago