slowbull / DDG
A PyTorch implementation of the paper "Decoupled Parallel Backpropagation with Convergence Guarantee"
☆30 · Updated 6 years ago
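Below is a minimal, single-process sketch of the delayed-gradient idea the paper describes: the network is split into modules, and the lower module's backward pass consumes a boundary gradient that is one iteration stale, which is what allows the per-module backward passes to run in parallel. All names here are illustrative, not the repo's actual API; the sketch also recomputes the lower module's forward pass when applying the stale gradient so the autograd graph stays valid, whereas the actual implementation pipelines stored graphs across GPUs.

```python
# Hedged sketch of DDG-style delayed gradients (Huo et al., 2018).
# Illustrative only; the repo's real API and pipelining differ.
import torch
import torch.nn as nn

torch.manual_seed(0)
lower = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # module 1
upper = nn.Sequential(nn.Linear(32, 1))               # module 2
opt_lo = torch.optim.SGD(lower.parameters(), lr=0.05)
opt_hi = torch.optim.SGD(upper.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

stale = None  # (input batch, boundary gradient) from the previous step

for step in range(200):
    x, y = torch.randn(8, 10), torch.randn(8, 1)

    # Forward through the lower module, cutting the graph at the boundary.
    with torch.no_grad():
        h = lower(x)
    h.requires_grad_(True)

    # The upper module trains on the current activations, as usual.
    loss = loss_fn(upper(h), y)
    opt_hi.zero_grad()
    loss.backward()                  # fills h.grad, the boundary gradient
    opt_hi.step()

    # The lower module trains on the boundary gradient from the *previous*
    # iteration; in a pipelined setting this backward pass can overlap with
    # the upper module's work on the current batch.
    if stale is not None:
        x_old, g_old = stale
        opt_lo.zero_grad()
        lower(x_old).backward(g_old)  # delayed gradient, fresh graph
        opt_lo.step()
    stale = (x, h.grad.detach().clone())
```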
Related projects
Alternatives and complementary repositories for DDG
- ☆29 · Updated 4 years ago
- SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning ☆23 · Updated 6 years ago
- ☆74 · Updated 5 years ago
- Net2Net implementation on PyTorch for any possible vision layers. ☆38 · Updated 7 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 5 years ago
- Reproduction and analysis of the SNIP paper ☆29 · Updated 4 years ago
- ☆82 · Updated 4 years ago
- Compressing Neural Networks using the Variational Information Bottleneck ☆64 · Updated 2 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 5 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆101 · Updated 4 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch☆86Updated 3 years ago
- Code release to reproduce the ASHA experiments from "Random Search and Reproducibility for NAS" ☆22 · Updated 5 years ago
- Code for "EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis" https://arxiv.org/abs/1905.05934 ☆112 · Updated 4 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆61 · Updated 3 years ago
- Code accompanying the NeurIPS 2020 paper "WoodFisher" (Singh & Alistarh, 2020) ☆46 · Updated 3 years ago
- This repository is no longer maintained. Check … ☆82 · Updated 4 years ago
- PyTorch code for training neural networks without global back-propagation ☆162 · Updated 5 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆27 · Updated 4 years ago
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning (ICLR 2020) ☆33 · Updated 4 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆56 · Updated 6 years ago
- ☆143 · Updated last year
- ☆70 · Updated 4 years ago
- ☆61 · Updated last year
- ☆26 · Updated 5 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 2 years ago
- Code for "On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length", ICLR 2019☆11Updated last year
- ☆38Updated 4 years ago
- Zero-Shot Knowledge Distillation in Deep Networks☆64Updated 2 years ago
- Lua implementation of Entropy-SGD☆81Updated 6 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity" ☆44 · Updated 4 years ago
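For the LARS entry above, here is a minimal PyTorch sketch of the trust-ratio update from "Large Batch Training of Convolutional Networks" (You et al.): each layer's step is rescaled by eta * ||w|| / (||g|| + weight_decay * ||w||). The class name, hyperparameters, and defaults below are illustrative, not the linked repo's API.

```python
# Hedged sketch of Layer-wise Adaptive Rate Scaling (LARS); illustrative
# names and defaults, not the linked repository's implementation.
import torch

class LARS(torch.optim.Optimizer):
    """SGD with momentum where each layer's step is rescaled by a per-layer
    trust ratio: eta * ||w|| / (||g|| + weight_decay * ||w||)."""

    def __init__(self, params, lr=0.1, momentum=0.9, weight_decay=5e-4, eta=1e-3):
        super().__init__(params, dict(lr=lr, momentum=momentum,
                                      weight_decay=weight_decay, eta=eta))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                w_norm = p.norm().item()
                g_norm = p.grad.norm().item()
                # Per-layer trust ratio; fall back to 1 for zero norms.
                if w_norm > 0 and g_norm > 0:
                    trust = group["eta"] * w_norm / (
                        g_norm + group["weight_decay"] * w_norm)
                else:
                    trust = 1.0
                update = p.grad + group["weight_decay"] * p  # L2-regularized grad
                buf = self.state[p].setdefault("momentum_buffer",
                                               torch.zeros_like(p))
                buf.mul_(group["momentum"]).add_(update, alpha=group["lr"] * trust)
                p.add_(buf, alpha=-1.0)
```

Usage is drop-in for SGD: construct with `opt = LARS(model.parameters(), lr=0.1)` and run the usual `loss.backward(); opt.step()` loop. The trust ratio is what keeps very-large-batch training stable, since layers with small gradients relative to their weight norms take proportionally larger steps.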