cambridge-mlg / miracle
This repository contains the code for our recent paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters".
☆21 · Updated 6 years ago
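For context, the core idea behind the paper above can be sketched in a few lines. This is an illustrative toy, not the repository's implementation: minimal random coding draws candidate weight samples from a shared prior, picks one with probability proportional to the importance weight q(w)/p(w), and transmits only the chosen index (roughly log2 of the sample count in bits). The function name `mrc_encode` and the diagonal-Gaussian setup are assumptions made for this sketch.

```python
import numpy as np

def mrc_encode(mu_q, sigma_q, sigma_p, n_samples, rng):
    """Toy minimal-random-coding step for a diagonal Gaussian posterior
    q = N(mu_q, sigma_q^2) and prior p = N(0, sigma_p^2).

    Draws candidates from the prior, selects one with probability
    proportional to q(w)/p(w); only the index needs transmitting,
    since a decoder with the same seed regenerates the candidates."""
    candidates = rng.normal(0.0, sigma_p, size=(n_samples, mu_q.size))
    # Log-densities up to the shared -0.5*log(2*pi) constant, which cancels.
    log_q = -0.5 * ((candidates - mu_q) / sigma_q) ** 2 - np.log(sigma_q)
    log_p = -0.5 * (candidates / sigma_p) ** 2 - np.log(sigma_p)
    log_w = (log_q - log_p).sum(axis=1)  # sum over independent weight dims
    probs = np.exp(log_w - log_w.max())  # stabilise before normalising
    probs /= probs.sum()
    idx = rng.choice(n_samples, p=probs)
    return idx, candidates[idx]  # index costs ~log2(n_samples) bits

rng = np.random.default_rng(0)
mu_q = np.array([1.0, -0.5])  # variational posterior means for two weights
idx, w = mrc_encode(mu_q, sigma_q=0.1, sigma_p=1.0, n_samples=4096, rng=rng)
```

With enough candidates (roughly 2^KL(q||p)), the selected sample behaves like a draw from the posterior while costing only the index to transmit.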
Alternatives and similar repositories for miracle
Users interested in miracle are comparing it to the repositories listed below.
- Regularization, Neural Network Training Dynamics ☆14 · Updated 5 years ago
- ☆42 · Updated 6 years ago
- This repository is no longer maintained. Check ☆81 · Updated 5 years ago
- ☆33 · Updated 6 years ago
- Monotone operator equilibrium networks ☆52 · Updated 4 years ago
- Code release for Hoogeboom, Emiel, Jorn W. T. Peters, Rianne van den Berg, and Max Welling, "Integer Discrete Flows and Lossless Compression…" ☆97 · Updated 5 years ago
- Limitations of the Empirical Fisher Approximation ☆47 · Updated 3 months ago
- ☆45 · Updated 5 years ago
- Implementation of Information Dropout ☆39 · Updated 7 years ago
- TensorFlow implementation of "noisy K-FAC" and "noisy EK-FAC" ☆60 · Updated 6 years ago
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 8 years ago
- Feasible target propagation code for the paper "Deep Learning as a Mixed Convex-Combinatorial Optimization Problem" by Friesen & Domingos… ☆28 · Updated 7 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆64 · Updated 4 years ago
- ☆27 · Updated 6 years ago
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- PyTorch implementation of "Semi-Implicit Methods for Deep Neural Networks" ☆24 · Updated 6 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆17 · Updated 4 years ago
- ☆36 · Updated 3 years ago
- Code accompanying our paper "Finding trainable sparse networks through Neural Tangent Transfer", published at ICML 2020 ☆13 · Updated 4 years ago
- Learning to share: simultaneous parameter tying and sparsification in deep learning ☆12 · Updated 6 years ago
- Implementation of methods proposed in "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" (NeurIPS 2019) ☆35 · Updated 4 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆39 · Updated 3 years ago
- This repository provides the source code used in the paper "A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off" ☆13 · Updated 6 years ago
- The Singular Values of Convolutional Layers ☆72 · Updated 6 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- ☆71 · Updated 5 years ago
- Scaled MMD GAN ☆36 · Updated 5 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020 ☆75 · Updated 10 months ago
- ☆58 · Updated 2 years ago
- ☆83 · Updated 5 years ago