HIPS/hypergrad
Exploring differentiation with respect to hyperparameters
☆294 · Updated 9 years ago
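hypergrad's core idea, differentiating a validation loss through an entire training run with respect to a hyperparameter such as the learning rate, can be sketched in plain Python. The quadratic losses, step count, and the finite-difference shortcut below are illustrative assumptions, not hypergrad's API; hypergrad computes the same quantity exactly by reverse-mode differentiation through the unrolled SGD trajectory (Maclaurin et al., "Gradient-based Hyperparameter Optimization through Reversible Learning").

```python
# Minimal sketch of a "hypergradient": the derivative of validation loss
# after T steps of SGD with respect to the learning rate. The 1-D quadratic
# losses here are toy assumptions chosen so everything is easy to follow.

def train(lr, w0=0.0, a=1.0, steps=10):
    """Run `steps` iterations of gradient descent on L_train(w) = (w - a)^2."""
    w = w0
    for _ in range(steps):
        w -= lr * 2.0 * (w - a)   # d/dw (w - a)^2 = 2 (w - a)
    return w

def val_loss(lr, b=0.8):
    """Validation loss L_val(w_T) = (w_T - b)^2 of the trained weight."""
    return (train(lr) - b) ** 2

def hypergrad(lr, eps=1e-6):
    """Central finite difference of val_loss w.r.t. the learning rate.
    A library like hypergrad gets this derivative exactly by differentiating
    back through the unrolled training loop instead of perturbing it."""
    return (val_loss(lr + eps) - val_loss(lr - eps)) / (2.0 * eps)

lr = 0.05
g = hypergrad(lr)
# One gradient step on the hyperparameter itself:
lr_new = lr - 0.01 * g
```

With these toy losses, `val_loss(lr_new)` is lower than `val_loss(lr)`: the hypergradient step moves the learning rate in a direction that improves validation performance, which is exactly the optimization loop hypergrad makes possible at scale.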
Alternatives and similar repositories for hypergrad:
Users interested in hypergrad are comparing it to the libraries listed below.
- Variational and semi-supervised neural network toppings for Lasagne ☆208 · Updated 8 years ago
- HPOlib is a hyperparameter optimization library. It provides a common interface to three state-of-the-art hyperparameter optimization packages. ☆166 · Updated 6 years ago
- Python package for modular Bayesian optimization ☆134 · Updated 4 years ago
- Scikit-learn compatible tools using Theano ☆363 · Updated 8 years ago
- Multi-GPU mini-framework for Theano ☆195 · Updated 7 years ago
- Auto-tuning momentum SGD optimizer ☆286 · Updated 6 years ago
- Flexible Bayesian inference using TensorFlow ☆143 · Updated 8 years ago
- Optimizers for machine learning ☆183 · Updated last year
- DeepArchitect: Automatically Designing and Training Deep Architectures ☆145 · Updated 5 years ago
- Code to train Importance Weighted Autoencoders on MNIST and OMNIGLOT ☆206 · Updated 9 years ago
- Reference implementations of neural networks with differentiable memory mechanisms (NTM, Stack RNN, etc.) ☆219 · Updated 9 years ago
- Implementation in C and Theano of Probabilistic Backpropagation for scalable Bayesian inference in deep neural networks ☆192 · Updated 6 years ago
- Stochastic Gradient Boosted Decision Trees as a standalone tool, TMVA plugin, and Python interface ☆246 · Updated 4 years ago
- Deep Unsupervised Perceptual Grouping ☆131 · Updated 4 years ago
- DrMAD ☆107 · Updated 7 years ago
- Implementation of the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) ☆172 · Updated 8 years ago
- Bayesian optimization using a modified neural network in place of a Gaussian process ☆108 · Updated 7 years ago
- The Ladder network: a deep learning algorithm that combines supervised and unsupervised learning ☆242 · Updated 8 years ago
- Forward-mode automatic differentiation for TensorFlow ☆139 · Updated 7 years ago
- Keras implementation of "Deep Networks with Stochastic Depth" (http://arxiv.org/abs/1603.09382) ☆139 · Updated 4 years ago
- Generative Adversarial Networks with Keras ☆156 · Updated 4 years ago
- Efficient implementation of Generative Stochastic Networks ☆317 · Updated 8 years ago
- Benchmarks for several RNN variants across different deep-learning frameworks ☆169 · Updated 5 years ago
- Implementation of http://arxiv.org/abs/1511.05641, which lets one build a larger net starting from a smaller one ☆159 · Updated 8 years ago
- Stochastic gradient routines for Theano ☆102 · Updated 6 years ago
- Benchmark testbed for assessing the performance of optimisation algorithms ☆80 · Updated 10 years ago
- Fully differentiable deep neural decision forest in TensorFlow ☆229 · Updated 7 years ago
- Bringing up some extra Cosmo to Keras ☆378 · Updated 8 years ago
- Neural network training using iterated projections ☆89 · Updated 8 years ago
- Capsule network with variations, originally proposed by Tieleman & Hinton: http://www.cs.toronto.edu/~tijmen/tijmen_thesis.pdf ☆170 · Updated 7 years ago