Hyperparameter optimization with approximate gradient
☆66 · updated Mar 23, 2021
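hoag's approach rests on computing a hypergradient through the inner training problem via implicit differentiation. A minimal self-contained sketch of that idea on ridge regression with synthetic data (this is illustrative only, not hoag's actual API; all names and the data setup are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)      # training split
Xv, yv = rng.normal(size=(30, 5)), rng.normal(size=30)    # validation split

def hypergradient(lam):
    """Gradient of the validation loss w.r.t. the ridge penalty lam."""
    # Inner problem solved in closed form: w*(lam) = argmin_w
    #   0.5*||Xw - y||^2 + 0.5*lam*||w||^2
    H = X.T @ X + lam * np.eye(X.shape[1])   # Hessian of the inner objective
    w = np.linalg.solve(H, X.T @ y)          # inner solution w*(lam)
    g_w = Xv.T @ (Xv @ w - yv)               # outer (validation) gradient in w
    # Implicit function theorem: dw/dlam = -H^{-1} w,
    # hence d(val loss)/dlam = -g_w^T H^{-1} w.
    return -g_w @ np.linalg.solve(H, w), w

grad, w = hypergradient(1.0)
```

In hoag-style methods the inner problem and the linear system are only solved approximately (hence "approximate gradient"); the closed-form solve above is just the exact special case that makes the formula easy to check.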
Alternatives and similar repositories for hoag
Users that are interested in hoag are comparing it to the libraries listed below.
Sorting:
- Example code for the paper "Provably Faster Algorithms for Bilevel Optimization" ☆15 · updated Dec 28, 2021
- Exploring differentiation with respect to hyperparameters ☆297 · updated Jan 15, 2016
- Code repository for the ICML 2020 paper "A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Le…" ☆12 · updated Jan 4, 2022
- ☆69 · updated Dec 19, 2018
- Gradient-based hyperparameter optimization & meta-learning package for TensorFlow ☆190 · updated Mar 24, 2020
- Stochastic Gradient Boosted Decision Trees as standalone, TMVA plugin, and Python interface ☆248 · updated Jul 19, 2020
- DrMAD ☆107 · updated Nov 12, 2017
- Ordered Weighted L1 regularization for classification and regression in Python ☆52 · updated Aug 27, 2018
- Simple example of gradient-based hyperparameter optimization using TensorFlow ☆19 · updated Feb 29, 2016
- Echo Noise Channel for exact mutual information calculation ☆17 · updated Jul 17, 2020
- Proximal Backpropagation, a neural network training algorithm that takes implicit instead of explicit gradient steps ☆42 · updated Mar 17, 2019
- ByteNet for character-level language modelling ☆318 · updated Aug 23, 2017
- Dask and Spark interactions ☆21 · updated Mar 13, 2017
- Fastidious accounting of entropy streams into and out of optimization and sampling algorithms ☆33 · updated Apr 19, 2016
- Code for Kernel Adaptive Metropolis-Hastings ☆33 · updated Jan 4, 2015
- Code and models from the paper "Layer Normalization" ☆243 · updated Nov 8, 2016
- Hyperparameter optimization with DrMAD and Hypero ☆23 · updated Jun 9, 2016
- Torch implementation reproducing the MNIST experiments from DeepMind's DNI paper ☆44 · updated Mar 4, 2017
- Example code for the paper "Bilevel Optimization: Nonasymptotic Analysis and Faster Algorithms" ☆49 · updated Dec 28, 2021
- R package for sparse VAR estimation ☆13 · updated Feb 5, 2026
- RNN with differentiable structure (number of neurons) ☆22 · updated Aug 1, 2016
- Official repo for the experiments in the paper "Bilevel Programming for Hyperparameter Optimization and Meta-Learning" ☆29 · updated Jun 7, 2018
- A multi-threaded, multi-GPU Waffle web server ☆12 · updated Apr 12, 2016
- Active Imitation Learning with Noisy Guidance ☆10 · updated May 29, 2020
- Python package to fetch data from the LIBSVM website ☆20 · updated Feb 25, 2026
- Cyclades ☆28 · updated Apr 7, 2018
- Gradient-based hyperparameter optimization package with TensorFlow ☆51 · updated Jul 13, 2019
- ☆29 · updated Jan 25, 2018
- Weight initialisation schemes for Torch7 neural network modules ☆100 · updated Jun 21, 2017
- Online Classification Library ☆15 · updated Jun 9, 2013
- Variational and semi-supervised neural network toppings for Lasagne ☆210 · updated Aug 25, 2016
- Code for the Torch in Action book ☆44 · updated Jan 24, 2017
- ☆27 · updated Jan 15, 2026
- Implementation of the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) ☆174 · updated Nov 3, 2016
- Deploy Dask on Marathon ☆10 · updated Feb 6, 2017
- ☆30 · updated Sep 13, 2020
- Theano implementation of the T1-T2 gradient-based method for tuning continuous hyperparameters ☆10 · updated Jun 20, 2016
- A short paper describing the library is available on arXiv ☆64 · updated Jan 5, 2018
- ☆11 · updated Apr 30, 2016