cambridge-mlg / miracle
This repository contains the code for our recent paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters"
☆21 · Updated 6 years ago
Alternatives and similar repositories for miracle
Users interested in miracle are comparing it to the libraries listed below.
- Demo: Slightly More Bio-Plausible Backprop ☆21 · Updated 8 years ago
- Regularization, Neural Network Training Dynamics ☆14 · Updated 5 years ago
- Code release for Hoogeboom, Emiel, Jorn WT Peters, Rianne van den Berg, and Max Welling. "Integer Discrete Flows and Lossless Compression…" ☆98 · Updated 5 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆39 · Updated 3 years ago
- ☆33 · Updated 6 years ago
- Feasible target propagation code for the paper "Deep Learning as a Mixed Convex-Combinatorial Optimization Problem" by Friesen & Domingos… ☆28 · Updated 7 years ago
- Limitations of the Empirical Fisher Approximation ☆47 · Updated 4 months ago
- ☆71 · Updated 5 years ago
- Compression with Flows via Local Bits-Back Coding ☆39 · Updated 5 years ago
- This repository is no longer maintained. Check… ☆81 · Updated 5 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆75 · Updated 11 months ago
- ☆42 · Updated 6 years ago
- ☆45 · Updated 5 years ago
- TensorFlow implementation of "noisy K-FAC" and "noisy EK-FAC". ☆60 · Updated 6 years ago
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- Scaled MMD GAN ☆36 · Updated 5 years ago
- ICML 2019. Turn a pre-trained GAN model into a content-addressable model without retraining. ☆22 · Updated 11 months ago
- ☆150 · Updated 2 years ago
- This repository provides the source code used in the paper "A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off" ☆13 · Updated 6 years ago
- ☆52 · Updated 6 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆33 · Updated 4 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- PyTorch AutoNEB implementation to identify minimum energy paths, e.g. in neural network loss landscapes ☆55 · Updated 2 years ago
- Code accompanying our paper "Finding trainable sparse networks through Neural Tangent Transfer", published at ICML 2020. ☆13 · Updated 5 years ago
- ☆36 · Updated 3 years ago
- ☆32 · Updated 6 years ago
- Lossless compression using Probabilistic Circuits ☆16 · Updated 3 years ago
- Lua implementation of Entropy-SGD ☆82 · Updated 7 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆64 · Updated 4 years ago