davda54 / ada-hessian
Easy-to-use AdaHessian optimizer (PyTorch)
☆79 · Updated 5 years ago
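AdaHessian's core ingredient is Hutchinson's estimator of the Hessian diagonal, diag(H) ≈ E[z ⊙ Hz] for Rademacher vectors z, which it uses in place of Adam's squared-gradient term. A minimal pure-Python sketch of that estimator on a toy quadratic (the matrix `A`, sample count, and function names are illustrative assumptions, not taken from the repo):

```python
import random

# f(x) = 0.5 * x^T A x, so the gradient is A x and the Hessian is A.
A = [[3.0, 0.5], [0.5, 2.0]]

def hvp(v):
    # Exact Hessian-vector product for this quadratic: H v = A v.
    # AdaHessian obtains Hz via a second backward pass instead.
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def hutchinson_diag(num_samples=2000, seed=0):
    # Average z * (H z) over Rademacher vectors z (entries +/-1).
    rng = random.Random(seed)
    est = [0.0, 0.0]
    for _ in range(num_samples):
        z = [rng.choice([-1.0, 1.0]) for _ in range(2)]
        hz = hvp(z)
        for i in range(2):
            est[i] += z[i] * hz[i]
    return [e / num_samples for e in est]

print(hutchinson_diag())  # ≈ [3.0, 2.0], the true diagonal of A
```

The estimator is unbiased because E[z_i z_j] = 0 for i ≠ j, so the off-diagonal terms of A average out and only A_ii survives; AdaHessian then smooths this noisy diagonal with an exponential moving average, exactly as Adam smooths squared gradients.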
Alternatives and similar repositories for ada-hessian
Users interested in ada-hessian are comparing it to the libraries listed below.
- Pytorch code for the Fromage optimiser. ☆129 · Updated last year
- Structured matrices for compressing neural networks ☆67 · Updated 2 years ago
- Pytorch and Jax code for the Madam optimiser. ☆53 · Updated 4 years ago
- PyTorch-SSO: Scalable Second-Order methods in PyTorch ☆147 · Updated 2 years ago
- This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆184 · Updated 4 years ago
- ☆100 · Updated 3 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Network ☆63 · Updated 4 years ago
- Codebase for Learning Invariances in Neural Networks ☆96 · Updated 3 years ago
- Efficient Householder Transformation in PyTorch ☆66 · Updated 4 years ago
- Collection of the latest, greatest, deep learning optimizers (for Pytorch) - CNN, NLP suitable ☆217 · Updated 4 years ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 4 years ago
- Tensorflow implementation and notebooks for Implicit Maximum Likelihood Estimation ☆67 · Updated 3 years ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆208 · Updated last year
- ☆50 · Updated 5 years ago
- ASDL: Automatic Second-order Differentiation Library for PyTorch ☆190 · Updated 11 months ago
- Fast Discounted Cumulative Sums in PyTorch ☆96 · Updated 4 years ago
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆108 · Updated last year
- CUDA kernels for generalized matrix-multiplication in PyTorch ☆85 · Updated 4 years ago
- Code for the article "What if Neural Networks had SVDs?", to be presented as a spotlight paper at NeurIPS 2020. ☆77 · Updated last year
- Bayesianize: A Bayesian neural network wrapper in pytorch ☆89 · Updated last year
- Pytorch implementation of Variational Dropout Sparsifies Deep Neural Networks ☆84 · Updated 3 years ago
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆281 · Updated 2 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆108 · Updated 4 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- A custom PyTorch layer that is capable of implementing extremely wide and sparse linear layers efficiently ☆51 · Updated last year
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆187 · Updated 3 weeks ago
- Layerwise Batch Entropy Regularization ☆24 · Updated 3 years ago
- ☆47 · Updated 4 years ago
- ☆33 · Updated 5 years ago