google-research / wide-sparse-nets
☆19 · Updated 4 years ago
Alternatives and similar repositories for wide-sparse-nets
Users interested in wide-sparse-nets are comparing it to the libraries listed below.
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- Delta Orthogonal Initialization for PyTorch ☆18 · Updated 7 years ago
- ☆25 · Updated 5 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 4 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 6 years ago
- Code release to accompany the paper "Geometry-Aware Gradient Algorithms for Neural Architecture Search." ☆25 · Updated 5 years ago
- This repository provides the source code used in the paper: A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off ☆13 · Updated 6 years ago
- ☆19 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆32 · Updated 5 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- PyTorch implementation of HashedNets ☆37 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- ☆69 · Updated 5 years ago
- Architecture embeddings independent from the parametrization of the search space ☆15 · Updated 4 years ago
- Experiments with the ideas presented in https://arxiv.org/abs/2003.00152 by Frankle et al. ☆29 · Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- ☆22 · Updated 7 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- ☆41 · Updated 4 years ago
- Official repo for Firefly Neural Architecture Descent: A General Approach for Growing Neural Networks. Accepted at NeurIPS 2020. ☆34 · Updated 5 years ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆79 · Updated 5 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 5 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Code accompanying our paper "Finding trainable sparse networks through Neural Tangent Transfer", to be published at ICML 2020. ☆13 · Updated 5 years ago