HIPS / hypergrad
Exploring differentiation with respect to hyperparameters
☆297 · updated 8 years ago
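The idea behind hypergrad is to treat training itself as a differentiable function, so the gradient of validation loss with respect to a hyperparameter (e.g. the learning rate) can be computed by differentiating through the optimization steps. A minimal sketch of that idea, written here with JAX rather than the repo's own autodiff code (the function names and the tiny least-squares setup are illustrative assumptions, not hypergrad's API):

```python
# Sketch: hypergradient of validation loss w.r.t. the (log) learning rate,
# obtained by unrolling a few SGD steps and differentiating through them.
# Uses JAX for illustration; hypergrad itself ships its own autodiff code.
import jax
import jax.numpy as jnp

def train_loss(w, x, y):
    # Simple least-squares training objective.
    return jnp.mean((x @ w - y) ** 2)

def val_loss_after_training(log_lr, w0, x_tr, y_tr, x_val, y_val, steps=20):
    # Run `steps` SGD updates; every operation stays differentiable,
    # so gradients can flow back to the hyperparameter log_lr.
    lr = jnp.exp(log_lr)
    w = w0
    for _ in range(steps):
        g = jax.grad(train_loss)(w, x_tr, y_tr)
        w = w - lr * g
    return train_loss(w, x_val, y_val)

# d(validation loss) / d(log learning rate)
hypergrad_fn = jax.grad(val_loss_after_training)

# Tiny synthetic problem to exercise the hypergradient.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))
y = x @ jnp.array([1.0, -2.0, 0.5])
w0 = jnp.zeros(3)
hg = hypergrad_fn(jnp.log(0.1), w0, x[:24], y[:24], x[24:], y[24:])
```

`hg` is a scalar that could drive gradient-based tuning of the learning rate itself; hypergrad's contribution is doing this memory-efficiently by reversing the training dynamics instead of naively unrolling them as above.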
Related projects
Alternatives and complementary repositories for hypergrad
- Variational and semi-supervised neural network toppings for Lasagne (☆208, updated 8 years ago)
- Auto-tuning momentum SGD optimizer (☆287, updated 5 years ago)
- Python package for modular Bayesian optimization (☆134, updated 3 years ago)
- Flexible Bayesian inference using TensorFlow (☆143, updated 7 years ago)
- HPOlib is a hyperparameter optimization library. It provides a common interface to three state-of-the-art hyperparameter optimization pac… (☆166, updated 6 years ago)
- Scikit-learn compatible tools using Theano (☆364, updated 7 years ago)
- Generative Adversarial Networks with Keras (☆156, updated 4 years ago)
- Multi-GPU mini-framework for Theano (☆195, updated 7 years ago)
- Implementation in C and Theano of Probabilistic Backpropagation for scalable Bayesian inference in deep neural networks (☆192, updated 5 years ago)
- Optimizers for machine learning (☆180, updated last year)
- 🏃 Implementation of Using Fast Weights to Attend to the Recent Past (☆268, updated 5 years ago)
- Reference implementations of neural networks with differentiable memory mechanisms (NTM, Stack RNN, etc.) (☆220, updated 8 years ago)
- Code to train Importance Weighted Autoencoders on MNIST and OMNIGLOT (☆204, updated 8 years ago)
- AI•ON projects repository and website source (☆217, updated 6 years ago)
- Keras implementation of "Deep Networks with Stochastic Depth" http://arxiv.org/abs/1603.09382 (☆139, updated 4 years ago)
- Implements the SFO minibatch optimizer in Python and MATLAB, and reproduces figures from the paper (☆127, updated 3 years ago)
- Deep Unsupervised Perceptual Grouping (☆131, updated 4 years ago)
- DeepArchitect: Automatically Designing and Training Deep Architectures (☆144, updated 5 years ago)
- An intelligent block matrix library for numpy, PyTorch, and beyond (☆300, updated 3 months ago)
- Implementation of the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) (☆172, updated 8 years ago)
- DrMAD (☆108, updated 7 years ago)
- Forward-mode Automatic Differentiation for TensorFlow (☆140, updated 6 years ago)
- Uses a modified neural network instead of a Gaussian process for Bayesian optimization (☆107, updated 7 years ago)
- Modular & extensible deep learning framework built on Theano (☆210, updated last year)
- Adversarial networks in TensorFlow (☆170, updated 8 years ago)
- Breze with all the stuff (☆96, updated 8 years ago)
- Capsule network with variations, originally proposed by Tieleman & Hinton: http://www.cs.toronto.edu/~tijmen/tijmen_thesis.pdf (☆170, updated 7 years ago)
- Track trending arXiv papers on Twitter from within your circle (☆172, updated 8 years ago)
- Neural network training using iterated projections (☆89, updated 7 years ago)
- Implementation of http://arxiv.org/abs/1511.05641 that lets one build a larger net starting from a smaller one (☆160, updated 7 years ago)