Cerebras / online-normalization
Online Normalization for Training Neural Networks (Companion Repository)
☆79 · Updated 3 years ago
Alternatives and similar repositories for online-normalization:
Users interested in online-normalization are comparing it to the libraries listed below.
- Custom CUDA kernel for {2, 3}d relative attention with PyTorch wrapper ☆43 · Updated 4 years ago
- PyProf2: PyTorch Profiling tool ☆82 · Updated 4 years ago
- SelectiveBackprop accelerates training by dynamically prioritizing useful examples with high loss ☆32 · Updated 4 years ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆77 · Updated 4 years ago
- 👩 PyTorch and Jax code for the Madam optimiser ☆51 · Updated 3 years ago
- GBDT-NAS ☆28 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- Structured matrices for compressing neural networks ☆66 · Updated last year
- ☆47 · Updated 4 years ago
- Unofficial PyTorch implementation of ReZero in ResNet ☆23 · Updated 4 years ago
- Automatic learning-rate scheduler ☆44 · Updated 3 years ago
- Efficient DataLoader for PyTorch and Keras for loading datasets from web servers and object stores ☆29 · Updated 5 years ago
- Code to accompany the paper "Hierarchical Quantized Autoencoders" ☆37 · Updated last year
- [ICML 2024] SIRFShampoo: Structured inverse- and root-free Shampoo in PyTorch (https://arxiv.org/abs/2402.03496) ☆14 · Updated 2 months ago
- Make TFRecord Usable Again ☆87 · Updated last year
- Code base for SRSGD ☆28 · Updated 4 years ago
- Official code for Long Expressive Memory (ICLR 2022, Spotlight) ☆69 · Updated 2 years ago
- Layerwise Batch Entropy Regularization ☆22 · Updated 2 years ago
- An implementation of Shampoo ☆74 · Updated 6 years ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆40 · Updated 3 years ago
- PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity" (NeurIPS 2020) ☆64 · Updated 4 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆86 · Updated 4 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 6 months ago
- A re-implementation of Fixed-update Initialization ☆152 · Updated 5 years ago
- Filter Response Normalization tested on better ImageNet baselines ☆35 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 5 years ago
- ☆62 · Updated 4 years ago
- A PyTorch implementation of Natural Gradient Descent ☆44 · Updated 5 years ago
- diffGrad: An Optimization Method for Convolutional Neural Networks ☆55 · Updated 2 years ago
- A simple NumPy implementation of the FAVOR+ attention mechanism, https://teddykoker.com/2020/11/performers/ ☆37 · Updated 4 years ago