JiJingYu / delta_orthogonal_init_pytorch
Delta Orthogonal Initialization for PyTorch
☆18 · Updated 7 years ago
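Delta-orthogonal initialization (Xiao et al., 2018, "Dynamical Isometry and a Mean Field Theory of CNNs") zeroes every spatial tap of a convolution kernel except the centre one, which receives an orthogonal matrix, so each conv layer acts as an orthogonal map at initialization. Below is a minimal PyTorch sketch of the idea; the helper name `delta_orthogonal_` is illustrative and may not match this repo's actual API.

```python
import torch
import torch.nn as nn

def delta_orthogonal_(weight: torch.Tensor, gain: float = 1.0) -> torch.Tensor:
    """Delta-orthogonal init for a conv weight of shape
    (out_channels, in_channels, kH, kW): all spatial taps are zeroed
    except the centre tap, which gets an orthogonal matrix."""
    out_ch, in_ch, kh, kw = weight.shape
    if out_ch < in_ch:
        # The paper assumes out_channels >= in_channels so the centre
        # tap can be exactly column-orthonormal.
        raise ValueError("delta-orthogonal init expects out_channels >= in_channels")
    with torch.no_grad():
        weight.zero_()
        centre = torch.empty(out_ch, in_ch)
        nn.init.orthogonal_(centre, gain=gain)
        weight[:, :, kh // 2, kw // 2] = centre
    return weight

# Usage (kernel size should be odd so there is a true centre tap):
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
delta_orthogonal_(conv.weight)
```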
Alternatives and similar repositories for delta_orthogonal_init_pytorch
Users interested in delta_orthogonal_init_pytorch are comparing it to the libraries listed below.
- Code base for SRSGD. ☆28 · Updated 5 years ago
- Code for "EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis" https://arxiv.org/abs/1905.05934 ☆113 · Updated 5 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 4 years ago
- This repository is no longer maintained. Check ☆81 · Updated 5 years ago
- ☆39 · Updated 5 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆87 · Updated 4 years ago
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning (ICLR 2020) ☆33 · Updated 4 years ago
- [NeurIPS '18] "Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?" Official Implementation (a sketch of the basic soft-orthogonality penalty appears after this list). ☆129 · Updated 3 years ago
- An implementation of shampoo ☆77 · Updated 7 years ago
- PyTorch code for training neural networks without global back-propagation ☆165 · Updated 5 years ago
- A Re-implementation of Fixed-update Initialization ☆152 · Updated 6 years ago
- Implementation of soft parameter sharing for neural networks ☆69 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- This project is the Torch implementation of our accepted AAAI 2018 paper: orthogonal weight normalization method for solving orthogonali… ☆57 · Updated 5 years ago
- Simple implementation of the LSUV initialization in PyTorch ☆58 · Updated last year
- ☆23 · Updated 6 years ago
- Net2Net implementation on PyTorch for any possible vision layers. ☆38 · Updated 7 years ago
- Implementation of the reversible residual network in PyTorch ☆105 · Updated 3 years ago
- ☆34 · Updated 6 years ago
- ☆70 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks ☆67 · Updated 3 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- ☆22 · Updated 7 years ago
- A PyTorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- ☆45 · Updated 5 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 4 years ago
- Cheap distillation for convolutional neural networks.
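For the [NeurIPS '18] orthogonality-regularization entry above: that paper studies several penalties (soft orthogonality, double soft orthogonality, mutual coherence, SRIP). The sketch below shows only the simplest soft-orthogonality term, the sum over layers of ||W Wᵀ - I||_F², added to the task loss; the function name and the 1e-4 weight are illustrative assumptions, not the paper's recommended settings.

```python
import torch
import torch.nn as nn

def soft_orthogonality_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of ||W W^T - I||_F^2 over Linear/Conv2d weights,
    with conv kernels flattened to (out_channels, fan_in)."""
    penalty = 0.0  # Python scalar; promoted to a tensor on first addition
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.flatten(1)   # (out, fan_in)
            gram = w @ w.t()               # (out, out)
            eye = torch.eye(gram.size(0), device=w.device, dtype=w.dtype)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# During training, add the weighted penalty to the task loss:
# loss = criterion(outputs, targets) + 1e-4 * soft_orthogonality_penalty(model)
```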