btgraham / Batchwise-Dropout
Run fully connected artificial neural networks with dropout applied (mini)batchwise, rather than samplewise. Given two hidden layers each subject to 50% dropout, the corresponding matrix multiplications for forward- and back-propagation are 75% less work, since the dropped-out units are never calculated.
☆15 · Updated 10 years ago
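The saving comes from sharing one dropout mask across the whole minibatch: dropped units can simply be sliced out of the weight matrices before the matrix multiply, instead of being computed and then zeroed. Below is a minimal NumPy sketch of that idea; it is not the repository's own code, and all names, shapes, and the two-layer setup are illustrative.

```python
import numpy as np

def batchwise_dropout_forward(x, W1, W2, p=0.5, rng=np.random.default_rng()):
    """Forward pass through two fully connected layers with batchwise dropout.

    One mask is drawn per hidden layer for the *whole* minibatch, so dropped
    units are removed from the weight matrices before the matmul rather than
    multiplied and zeroed afterwards.
    """
    # Hidden layer 1: keep a random subset of its units for this minibatch.
    keep1 = rng.random(W1.shape[1]) > p        # boolean mask over layer-1 units
    h1 = np.maximum(x @ W1[:, keep1], 0.0)     # only the kept columns are computed

    # Hidden layer 2: its input is already the reduced h1, so slice the rows of
    # W2 for the surviving layer-1 units and the columns for kept layer-2 units.
    keep2 = rng.random(W2.shape[1]) > p
    h2 = np.maximum(h1 @ W2[keep1][:, keep2], 0.0)

    return h1, h2, keep1, keep2

# With p = 0.5, the hidden-to-hidden multiply touches only ~25% of W2's
# entries (half the rows times half the columns), i.e. roughly 75% less work
# for that multiplication, which is the saving described above.
x = np.random.randn(32, 100)   # minibatch of 32 samples, 100 inputs (illustrative)
W1 = np.random.randn(100, 200)
W2 = np.random.randn(200, 200)
h1, h2, keep1, keep2 = batchwise_dropout_forward(x, W1, W2)
```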
Alternatives and similar repositories for Batchwise-Dropout
Users interested in Batchwise-Dropout are comparing it to the libraries listed below.
- On Certifying Non-uniform Bounds against Adversarial Attacks [ICML 2019] ☆6 · Updated 5 years ago
- ☆14 · Updated 5 years ago
- Slides for my graduate deep learning course at Imperial College. ☆6 · Updated 5 years ago
- ☆6 · Updated 5 years ago
- A lightweight experimental logging library ☆50 · Updated 10 months ago
- A lightweight implementation of the SimCLR unsupervised training framework in PyTorch. ☆8 · Updated 4 years ago
- Code for the CVPR 2020 [ORAL] paper "SAM: The Sensitivity of Attribution Methods to Hyperparameters" ☆27 · Updated 2 years ago
- An easy-to-use API to store outputs from forward/backward hooks in PyTorch ☆36 · Updated 5 years ago
- Semi-supervised learning via Compact Latent Space Clustering ☆49 · Updated 5 years ago
- CE9010 Introduction to Data Analysis, 2020 ☆7 · Updated 5 years ago
- ☆90 · Updated 5 years ago
- A library for evaluating representations. ☆76 · Updated 3 years ago
- Visualizing how deep networks make decisions ☆66 · Updated 5 years ago
- [CVPR 2017] AMT chat interface code used to collect the Visual Dialog dataset ☆79 · Updated 2 years ago
- There and Back Again: Revisiting Backpropagation Saliency Methods (CVPR 2020) ☆52 · Updated 5 years ago
- ☆123 · Updated 6 years ago
- ☆6 · Updated 2 years ago
- Materials for the practical sessions at EEML2019 ☆80 · Updated 4 years ago
- Exploring Random Encoders for Sentence Classification ☆183 · Updated 5 years ago
- Deep learning model to identify a scene/character from 10 different animated movies ☆28 · Updated 5 years ago
- ☆72 · Updated 5 years ago
- ☆13 · Updated 4 years ago
- PixelVAE with or without regularization ☆66 · Updated 7 years ago
- Original PyTorch implementation of the Leap meta-learner (https://arxiv.org/abs/1812.01054) along with code for running the Omniglot expe… ☆148 · Updated 2 years ago
- Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling ☆91 · Updated 7 years ago
- Columbia Advanced Machine Learning Seminar ☆24 · Updated 6 years ago
- NLE practical session for PAISS 2018 ☆16 · Updated 6 years ago
- Implementation of the paper "GibbsNet: Iterative Adversarial Inference for Deep Graphical Models" in PyTorch ☆57 · Updated 7 years ago
- Code for Attentive Recurrent Comparators ☆57 · Updated 8 years ago
- Code for the paper "L4: Practical loss-based stepsize adaptation for deep learning" ☆124 · Updated 6 years ago