btgraham / Batchwise-Dropout

Run fully connected artificial neural networks with dropout applied (mini)batchwise rather than samplewise. With two hidden layers each subject to 50% dropout, the matrix multiplications for forward- and back-propagation involve 75% less work, since the dropped-out units are never computed.
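A minimal NumPy sketch of the idea (illustrative only, not the repository's own code): because the same units are dropped for every sample in the minibatch, the corresponding rows and columns of the weight matrices can simply be sliced away before the multiplication, so the inner layer's matmul touches roughly 25% of the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input -> hidden1 -> hidden2 -> output (illustrative values)
n_in, n_h1, n_h2, n_out = 784, 1000, 1000, 10
W1 = rng.standard_normal((n_in, n_h1)) * 0.01
W2 = rng.standard_normal((n_h1, n_h2)) * 0.01
W3 = rng.standard_normal((n_h2, n_out)) * 0.01

def forward_batchwise_dropout(x, p=0.5):
    """Forward pass where one dropout mask is drawn per minibatch,
    so dropped rows/columns of the weight matrices are never touched."""
    keep1 = rng.random(n_h1) > p   # one mask for the whole minibatch, not per sample
    keep2 = rng.random(n_h2) > p

    h1 = np.maximum(x @ W1[:, keep1], 0)                 # only surviving columns of W1
    h2 = np.maximum(h1 @ W2[np.ix_(keep1, keep2)], 0)    # ~25% of W2's entries used
    return h2 @ W3[keep2, :]                             # only surviving rows of W3

x = rng.standard_normal((128, n_in))
out = forward_batchwise_dropout(x)
print(out.shape)  # (128, 10)
```

The same masks would be reused in back-propagation for that minibatch, which is where the claimed 75% saving on the hidden-to-hidden multiplication comes from.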