minhtannguyen / SRSGD
Code base for SRSGD.
☆29 · Updated 5 years ago
Alternatives and similar repositories for SRSGD
Users interested in SRSGD are comparing it to the libraries listed below.
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- Implementation of Methods Proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 5 years ago
- Delta Orthogonal Initialization for PyTorch ☆18 · Updated 7 years ago
- A pytorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- Code accompanying our paper "Finding trainable sparse networks through Neural Tangent Transfer" to be published at ICML-2020. ☆13 · Updated 5 years ago
- This repository is no longer maintained. Check ☆81 · Updated 5 years ago
- ☆39 · Updated 5 years ago
- ☆30 · Updated 4 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- ☆19 · Updated 6 years ago
- In this paper, we show that the performance of a learnt generative model is closely related to the model's ability to accurately represen… ☆41 · Updated 4 years ago
- ☆41 · Updated 2 years ago
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning (ICLR 2020) ☆33 · Updated 4 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago
- Fine-grained ImageNet annotations ☆30 · Updated 5 years ago
- SGD and Ordered SGD codes for deep learning, SVM, and logistic regression ☆36 · Updated 5 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆54 · Updated 6 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆25 · Updated 5 years ago
- Official Repo for Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks. Accepted by NeurIPS 2020. ☆34 · Updated 4 years ago
- This repo contains the code used for the NeurIPS 2019 paper "Asymmetric Valleys: Beyond Sharp and Flat Local Minima". ☆14 · Updated 5 years ago
- ☆45 · Updated 5 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- ☆15 · Updated 4 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 5 years ago
- ☆32 · Updated 4 years ago
- code to reproduce the empirical results in the research paper ☆36 · Updated 3 years ago
- ☆61 · Updated 2 years ago