HIPS / hypergrad
Exploring differentiation with respect to hyperparameters
☆295 · Updated 9 years ago
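As a rough illustration of what "differentiation with respect to hyperparameters" means, the sketch below unrolls a few SGD steps and takes the gradient of the final validation loss with respect to the learning rate. This is not code from the hypergrad repository: it uses JAX rather than the project's own automatic differentiation, and every name, shape, and constant is invented for the example.

```python
# Minimal hypergradient sketch (assumption: JAX, toy linear-regression data).
import jax
import jax.numpy as jnp

def train_loss(w, x, y):
    # Simple squared-error loss for a linear model.
    return jnp.mean((x @ w - y) ** 2)

def val_loss_after_training(log_lr, w0, x_tr, y_tr, x_val, y_val, steps=10):
    # Hyperparameter: learning rate, parameterized on a log scale.
    lr = jnp.exp(log_lr)
    w = w0
    for _ in range(steps):                      # unrolled inner SGD loop
        g = jax.grad(train_loss)(w, x_tr, y_tr)
        w = w - lr * g
    # The quantity we differentiate with respect to the hyperparameter.
    return train_loss(w, x_val, y_val)

# Hypergradient: d(validation loss) / d(log learning rate).
hypergrad_fn = jax.grad(val_loss_after_training, argnums=0)

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
x_tr = jax.random.normal(k1, (32, 5))
y_tr = jax.random.normal(k2, (32,))
x_val = jax.random.normal(k3, (16, 5))
y_val = jax.random.normal(k4, (16,))
w0 = jnp.zeros(5)

print(hypergrad_fn(jnp.log(0.1), w0, x_tr, y_tr, x_val, y_val))
```

The hypergrad project itself avoids storing the whole unrolled training trajectory by exactly reversing SGD-with-momentum, but the quantity being differentiated is the same kind of hypergradient as in this naive unrolled version.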
Alternatives and similar repositories for hypergrad:
Users interested in hypergrad are comparing it to the libraries listed below.
- Variational and semi-supervised neural network toppings for Lasagne ☆208 · Updated 8 years ago
- HPOlib is a hyperparameter optimization library. It provides a common interface to three state-of-the-art hyperparameter optimization packages ☆166 · Updated 6 years ago
- Auto-tuning momentum SGD optimizer ☆287 · Updated 6 years ago
- Optimizers for machine learning ☆183 · Updated last year
- Python package for modular Bayesian optimization ☆135 · Updated 4 years ago
- Scikit-learn compatible tools using Theano ☆363 · Updated 8 years ago
- Keras implementation for "Deep Networks with Stochastic Depth" (http://arxiv.org/abs/1603.09382) ☆139 · Updated 4 years ago
- Multi-GPU mini-framework for Theano ☆195 · Updated 7 years ago
- Flexible Bayesian inference using TensorFlow ☆143 · Updated 8 years ago
- Generative Adversarial Networks with Keras ☆156 · Updated 4 years ago
- Reference implementations of neural networks with differentiable memory mechanisms (NTM, Stack RNN, etc.) ☆219 · Updated 9 years ago
- Forward-mode Automatic Differentiation for TensorFlow ☆139 · Updated 7 years ago
- Modular Restricted Boltzmann Machine (RBM) implementation using Theano ☆173 · Updated 12 years ago
- Implementation in C and Theano of Probabilistic Backpropagation, a method for scalable Bayesian inference in deep neural networks ☆192 · Updated 6 years ago
- Implementation of http://arxiv.org/abs/1511.05641 that lets one build a larger net starting from a smaller one ☆159 · Updated 8 years ago
- Implementation of the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) ☆172 · Updated 8 years ago
- Efficient implementation of Generative Stochastic Networks ☆317 · Updated 9 years ago
- Benchmarks for several RNN variations with different deep-learning frameworks ☆169 · Updated 5 years ago
- DeepArchitect: Automatically Designing and Training Deep Architectures ☆145 · Updated 5 years ago
- DrMAD ☆107 · Updated 7 years ago
- Code for the paper "L4: Practical loss-based stepsize adaptation for deep learning" ☆125 · Updated 6 years ago
- Deep Unsupervised Perceptual Grouping ☆131 · Updated 4 years ago
- Code to train Importance Weighted Autoencoders on MNIST and OMNIGLOT ☆206 · Updated 9 years ago
- Adversarial networks in TensorFlow ☆170 · Updated 8 years ago
- InfiniteBoost: building infinite ensembles with gradient descent ☆184 · Updated 6 years ago
- AI•ON projects repository and website source ☆217 · Updated 7 years ago
- We use a modified neural network instead of a Gaussian process for Bayesian optimization ☆108 · Updated 7 years ago
- Stochastic gradient routines for Theano ☆102 · Updated 6 years ago
- 🏃 Implementation of Using Fast Weights to Attend to the Recent Past ☆268 · Updated 6 years ago