google-research / wide-sparse-nets
☆19 · Updated 4 years ago
Alternatives and similar repositories for wide-sparse-nets
Users interested in wide-sparse-nets are comparing it to the repositories listed below.
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020). ☆53 · Updated 4 years ago
- ☆25 · Updated 5 years ago
- Code release to accompany the paper "Geometry-Aware Gradient Algorithms for Neural Architecture Search". ☆25 · Updated 5 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness. ☆14 · Updated 5 years ago
- Local search for NAS. ☆18 · Updated 5 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot". ☆42 · Updated 5 years ago
- Implementation of Kronecker Attention in PyTorch. ☆19 · Updated 5 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks. ☆61 · Updated 5 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- ☆47 · Updated 5 years ago
- ☆22 · Updated 7 years ago
- ☆83 · Updated 6 years ago
- CIFAR-5m dataset. ☆40 · Updated 5 years ago
- Code accompanying the paper "Finding trainable sparse networks through Neural Tangent Transfer", published at ICML 2020. ☆13 · Updated 5 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 4 years ago
- ☆38 · Updated 2 years ago
- ☆37 · Updated 4 years ago
- Architecture embeddings independent from the parametrization of the search space. ☆15 · Updated 4 years ago
- Code for "Supermasks in Superposition". ☆125 · Updated 2 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction". ☆36 · Updated 3 years ago
- Delta Orthogonal Initialization for PyTorch. ☆18 · Updated 7 years ago
- Code that reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information". ☆49 · Updated 4 years ago
- ☆58 · Updated 2 years ago
- ☆59 · Updated 5 years ago
- Code release to reproduce the ASHA experiments from "Random Search and Reproducibility for NAS". ☆22 · Updated 6 years ago
- ☆42 · Updated 2 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference". ☆24 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f…". ☆47 · Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH). ☆105 · Updated 5 years ago