cambridge-mlg / miracle
This repository contains the code for our recent paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters"
☆21 · Updated 6 years ago
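At its core, the paper's minimal random coding scheme transmits a weight sample from the variational posterior q(w) by drawing K ≈ exp(KL(q‖p)) candidates from the prior p(w) with a shared random seed, picking one index in proportion to the importance weights q(w_i)/p(w_i), and sending only that index (≈ KL(q‖p) nats). The Python sketch below is a rough illustration of that idea under Gaussian q and p, not the code in this repository; `encode_block`, `decode_block`, and their arguments are hypothetical names.

```python
# A minimal, illustrative sketch of the minimal-random-coding idea behind
# MIRACLE (Havasi et al., 2018) -- NOT the repository's implementation.
# encode_block / decode_block and the Gaussian q and p are assumptions
# made for this example.
import numpy as np
from scipy.stats import norm


def encode_block(q_mu, q_sigma, p_sigma, kl_nats, seed=0):
    """Encode one sample from the posterior q = N(q_mu, q_sigma^2) against
    the prior p = N(0, p_sigma^2) using shared randomness.

    With K ~ exp(KL(q || p)) candidates, the transmitted index costs
    roughly KL(q || p) / ln 2 bits."""
    rng = np.random.default_rng(seed)                        # shared PRNG seed
    K = int(np.ceil(np.exp(kl_nats)))                        # candidate count
    samples = rng.normal(0.0, p_sigma, size=(K, len(q_mu)))  # draws from the prior
    # Importance weights q(w_i) / p(w_i), computed in log space for stability.
    log_w = (norm.logpdf(samples, q_mu, q_sigma)
             - norm.logpdf(samples, 0.0, p_sigma)).sum(axis=1)
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    idx = int(rng.choice(K, p=probs))                        # index ~ importance weights
    return idx, samples[idx]


def decode_block(idx, dim, p_sigma, kl_nats, seed=0):
    """Regenerate the encoded weights from the transmitted index alone."""
    rng = np.random.default_rng(seed)                        # same shared seed
    K = int(np.ceil(np.exp(kl_nats)))
    samples = rng.normal(0.0, p_sigma, size=(K, dim))        # same prior draws
    return samples[idx]
```

In the paper this kind of coding is applied block by block to the network weights, with a KL budget per block chosen so that the bits spent per index meet the target compression rate.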
Alternatives and similar repositories for miracle
Users who are interested in miracle are comparing it to the libraries listed below
- This repository is no longer maintained. Check… ☆81 · Updated 5 years ago
- Code for the article "What if Neural Networks had SVDs?", to be presented as a spotlight paper at NeurIPS 2020. ☆75 · Updated 9 months ago
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- Code release for Hoogeboom, Emiel, Jorn WT Peters, Rianne van den Berg, and Max Welling. "Integer Discrete Flows and Lossless Compression… ☆97 · Updated 5 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆38 · Updated 3 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- ☆36 · Updated 3 years ago
- ☆83 · Updated 5 years ago
- TensorFlow implementation of "noisy K-FAC" and "noisy EK-FAC". ☆60 · Updated 6 years ago
- Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019) ☆28 · Updated 6 years ago
- Implementation of Information Dropout ☆39 · Updated 7 years ago
- Feasible target propagation code for the paper "Deep Learning as a Mixed Convex-Combinatorial Optimization Problem" by Friesen & Domingos… ☆28 · Updated 7 years ago
- ☆42 · Updated 5 years ago
- ☆13 · Updated 6 years ago
- ☆71 · Updated 5 years ago
- Regularization, Neural Network Training Dynamics ☆14 · Updated 5 years ago
- Learning to share: simultaneous parameter tying and sparsification in deep learning ☆12 · Updated 6 years ago
- Limitations of the Empirical Fisher Approximation ☆47 · Updated 2 months ago
- ☆58 · Updated 2 years ago
- Official code for ICML 2020 paper "Variational Bayesian Quantization" ☆23 · Updated 2 years ago
- ☆27 · Updated 6 years ago
- Implementation of Methods Proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 4 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- Code base for SRSGD. ☆28 · Updated 5 years ago
- ☆53 · Updated 6 years ago
- ☆64 · Updated last year
- A TensorFlow implementation of the NIPS 2018 paper "Variational Inference with Tail-adaptive f-Divergence" ☆21 · Updated 6 years ago
- Monotone operator equilibrium networks ☆52 · Updated 4 years ago
- ☆45 · Updated 5 years ago
- Code for "A Spectral Approach to Gradient Estimation for Implicit Distributions" (ICML'18) ☆33 · Updated 2 years ago