btgraham / Batchwise-Dropout

Run fully connected artificial neural networks with dropout applied (mini)batchwise rather than samplewise. With two hidden layers each subject to 50% dropout, the corresponding matrix multiplications for forward- and back-propagation require roughly 75% less work, because the dropped-out units are never computed at all.
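A minimal NumPy sketch of the idea, not the repository's actual CUDA implementation; the layer sizes, variable names, and ReLU activations below are illustrative assumptions. Because one dropout mask is shared by the whole minibatch, the dropped rows and columns can be sliced out of the weight matrices before the matrix multiplications are performed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes (assumed for illustration): input -> hidden1 -> hidden2 -> output
n_in, n_h1, n_h2, n_out = 128, 256, 256, 10
batch_size = 32
p_drop = 0.5  # 50% dropout on each hidden layer

W1 = rng.standard_normal((n_in, n_h1)) * 0.01
W2 = rng.standard_normal((n_h1, n_h2)) * 0.01
W3 = rng.standard_normal((n_h2, n_out)) * 0.01

x = rng.standard_normal((batch_size, n_in))

# Batchwise dropout: draw ONE mask per hidden layer for the whole minibatch,
# then slice the weight matrices so dropped units are never computed.
keep1 = rng.random(n_h1) >= p_drop   # surviving units in hidden layer 1
keep2 = rng.random(n_h2) >= p_drop   # surviving units in hidden layer 2

# Forward pass over the reduced sub-network.
# W1[:, keep1] keeps ~half of W1's columns; W2[np.ix_(keep1, keep2)] keeps
# ~a quarter of W2's entries -- hence ~75% less multiply-add work for that product.
h1 = np.maximum(x @ W1[:, keep1], 0)               # ReLU over surviving units only
h2 = np.maximum(h1 @ W2[np.ix_(keep1, keep2)], 0)
out = h2 @ W3[keep2, :]
```

The backward pass enjoys the same saving, since the gradients only need to be propagated through the same sliced sub-matrices.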

Related projects

Alternatives and complementary repositories for Batchwise-Dropout