google-research / wide-sparse-nets
☆19 · Updated 4 years ago
Alternatives and similar repositories for wide-sparse-nets:
Users interested in wide-sparse-nets are comparing it to the repositories listed below.
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆48 · Updated 4 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆42 · Updated 4 years ago
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Implementation for ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) ☆15 · Updated 3 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆29 · Updated 3 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 5 years ago
- Implementation of Kronecker Attention in PyTorch ☆18 · Updated 4 years ago
- Delta Orthogonal Initialization for PyTorch ☆18 · Updated 6 years ago
- PyTorch implementation of HashedNets ☆36 · Updated last year
- Implementation of "Structured Multi-Hashing for Model Compression" (CVPR 2020) ☆11 · Updated 4 years ago
- Large-batch Training, Neural Network Optimization ☆9 · Updated 5 years ago
- Collection of snippets for PyTorch users ☆25 · Updated 3 years ago
- Architecture embeddings independent of the parametrization of the search space ☆15 · Updated 4 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆19 · Updated 4 years ago
- Identify a binary-weight (or binary weight-and-activation) subnetwork within a randomly initialized network by only pruning and binarizing … ☆52 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆35 · Updated 3 years ago
- SNIP: Single-Shot Network Pruning ☆30 · Updated last week
- Source code used in the paper "A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off" ☆13 · Updated 5 years ago
- Spartan is an algorithm for training sparse neural network models. This repository accompanies the paper "Spartan Differentiable Sparsity… ☆24 · Updated 2 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Encodings for neural architecture search ☆29 · Updated 3 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago