owruby / nelder_mead
An easy Python implementation of the Nelder-Mead method
☆18 · Updated 7 years ago
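The Nelder-Mead method is a derivative-free simplex search for unconstrained minimization. The repository's own API is not documented on this page, so as an illustrative sketch only, the snippet below shows the same method invoked through SciPy's optimizer; the test function `sphere` and the starting point are arbitrary choices made up for the example.

```python
# Illustrative sketch only: this uses SciPy's built-in Nelder-Mead optimizer,
# not the owruby/nelder_mead API. The test function and starting point are
# arbitrary choices for the example.
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    """Simple convex test function with its minimum at the origin."""
    return float(np.sum(x ** 2))

x0 = np.array([1.5, -2.0, 0.5])  # initial guess; no gradients are needed
result = minimize(sphere, x0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})

print(result.x)    # close to [0, 0, 0]
print(result.fun)  # close to 0.0
```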
Alternatives and similar repositories for nelder_mead:
Users interested in nelder_mead are comparing it to the libraries listed below.
- Noiseless/nonnegative sparse recovery and feature retrieval via compressed sensing ☆36 · Updated 6 years ago
- Presentations of advanced topics in optimization ☆11 · Updated 5 years ago
- Model uncertainty using MC dropout ☆20 · Updated 6 years ago
- An implementation of DropConnect Layer in Keras ☆36 · Updated 5 years ago
- The Great Autoencoder Bake Off ☆68 · Updated 4 years ago
- Efficient Deep Learning Survey Paper ☆33 · Updated 2 years ago
- Sparse dictionary learning ☆29 · Updated 11 years ago
- Notebooks for IPAM Tutorial, March 15 2019 ☆24 · Updated 6 years ago
- Benchmark Suite for Stochastic Gradient Descent Optimization Algorithms in Pytorch ☆16 · Updated 2 years ago
- Keras implementation of temporal ensembling (semi-supervised learning) ☆22 · Updated 6 years ago
- Some improvements on Adam ☆28 · Updated 4 years ago
- Random Mesh Projectors for Inverse Problems ☆24 · Updated 4 years ago
- LIBS2ML: A Library for Scalable Second Order Machine Learning Algorithms ☆10 · Updated 3 years ago
- 📝 Papers I read and notes/reviews I made. Also useful links to courses (RL/NLP/Bio/QC/DevOps) ☆10 · Updated 3 years ago
- Code of the Fair PCA algorithm introduced in the paper "The Price of Fair PCA: One Extra Dimension" ☆27 · Updated 2 years ago
- Code for our paper "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers" ☆21 · Updated 3 years ago
- Learning to rank using gradient descent ☆13 · Updated 11 years ago
- ☆9 · Updated 4 years ago
- Adversarial Lipschitz Regularization ☆10 · Updated 3 years ago
- Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent ☆45 · Updated 4 years ago
- Code for the paper "Let’s Make Block Coordinate Descent Go Fast" ☆48 · Updated last year
- Wavelet Neural Network implementation ☆7 · Updated 9 years ago
- Keras implementation of Padam from "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" ☆17 · Updated 6 years ago
- Keras implementation of Maximum Entropy Markov Model ☆9 · Updated 5 years ago
- ☆24 · Updated 5 years ago
- Minimal implementation of a radial basis function network ☆75 · Updated 5 years ago
- Parallel Solver for Large-Scale Sparse Matrix Computations ☆17 · Updated 5 years ago
- Implementation of the Adam optimization algorithm using NumPy ☆20 · Updated 5 years ago
- A general method for training a cost-sensitive robust classifier ☆22 · Updated 5 years ago
- A disciplined approach to neural network parameters: reviewing Leslie Smith's approach to setting hyperparameters ☆12 · Updated 6 years ago