Cerebras / online-normalization
Online Normalization for Training Neural Networks (Companion Repository)
☆81 · Updated 4 years ago
Alternatives and similar repositories for online-normalization
Users interested in online-normalization are comparing it to the libraries listed below:
- custom cuda kernel for {2, 3}d relative attention with pytorch wrapper ☆43 · Updated 5 years ago
- GBDT-NAS ☆28 · Updated 3 years ago
- Efficient DataLoader for PyTorch and Keras for loading datasets from web servers and object stores. ☆29 · Updated 5 years ago
- PyProf2: PyTorch Profiling tool ☆82 · Updated 4 years ago
- diffGrad: An Optimization Method for Convolutional Neural Networks ☆55 · Updated 2 years ago
- Code to accompany the paper "Hierarchical Quantized Autoencoders" ☆37 · Updated last year
- Code for paper "SWALP: Stochastic Weight Averaging forLow-Precision Training".☆62Updated 6 years ago
- "Learning Rate Dropout" in PyTorch☆34Updated 5 years ago
- SelectiveBackprop accelerates training by dynamically prioritizing useful examples with high loss☆32Updated 5 years ago
- Partially Adaptive Momentum Estimation method in the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep …☆39Updated 2 years ago
- ☆39Updated 5 years ago
- Exploiting Uncertainty of Loss Landscape for Stochastic Optimization☆15Updated 5 years ago
- 👩 Pytorch and Jax code for the Madam optimiser.☆51Updated 4 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch☆86Updated 4 years ago
- Code base for SRSGD.☆28Updated 5 years ago
- A Re-implementation of Fixed-update Initialization☆153Updated 5 years ago
- Unofficial pytorch implementation of ReZero in ResNet☆23Updated 5 years ago
- Automatic learning-rate scheduler☆45Updated 4 years ago
- ☆47Updated 4 years ago
- Easy-to-use AdaHessian optimizer (PyTorch)☆78Updated 4 years ago
- An implementation of shampoo☆74Updated 7 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth"☆62Updated 9 months ago
- A pytorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f…☆46Updated 5 years ago
- ☆70Updated 5 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS."☆22Updated 5 years ago
- A small demonstration of using WebDataset with ImageNet and PyTorch Lightning☆74Updated last year
- Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks☆18Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning.☆17Updated 4 years ago
- ☆23Updated 6 years ago
- Structured matrices for compressing neural networks☆66Updated last year