davda54 / ada-hessian
Easy-to-use AdaHessian optimizer (PyTorch)
☆78 · Updated 4 years ago
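AdaHessian preconditions its updates with a stochastic estimate of the Hessian diagonal obtained via Hutchinson's method, which is why training loops that use it call `loss.backward(create_graph=True)`. The PyTorch sketch below shows that estimator in isolation; the function name, signature, and single-probe default are illustrative, not this repository's API.

```python
import torch

def hutchinson_hessian_diag(loss, params, n_samples=1):
    """Estimate diag(H) of `loss` w.r.t. `params` via Hutchinson's method."""
    # First-order gradients; create_graph=True keeps the graph so we can
    # differentiate a second time.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        # Rademacher probe vectors with entries in {-1, +1}.
        zs = [torch.randint_like(p, high=2) * 2 - 1 for p in params]
        # Hessian-vector products: the gradient of (g . z) w.r.t. params is H z.
        hvps = torch.autograd.grad(grads, params, grad_outputs=zs, retain_graph=True)
        for est, hvp, z in zip(estimate, hvps, zs):
            est += z * hvp / n_samples  # E[z * (H z)] = diag(H), element-wise
    return estimate

# Illustrative usage on a tiny model.
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
diag = hutchinson_hessian_diag(loss, list(model.parameters()))
```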
Alternatives and similar repositories for ada-hessian:
Users interested in ada-hessian are comparing it to the libraries listed below.
- 🧀 PyTorch code for the Fromage optimiser. ☆124 · Updated 9 months ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆75 · Updated 9 months ago
- 👩 PyTorch and JAX code for the Madam optimiser. ☆51 · Updated 4 years ago
- ☆99 · Updated 3 years ago
- Structured matrices for compressing neural networks ☆66 · Updated last year
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆275 · Updated 2 years ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆209 · Updated last year
- PyTorch-SSO: Scalable Second-Order methods in PyTorch ☆145 · Updated last year
- Results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆180 · Updated 3 years ago
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆106 · Updated last year
- Very deep VAEs in JAX/Flax ☆46 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆109 · Updated 2 years ago
- Codebase for Learning Invariances in Neural Networks ☆95 · Updated 2 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- Monotone operator equilibrium networks ☆51 · Updated 4 years ago
- Fast Discounted Cumulative Sums in PyTorch ☆95 · Updated 3 years ago
- ☆67 · Updated last year
- A custom PyTorch layer that implements extremely wide and sparse linear layers efficiently ☆49 · Updated last year
- Experiment code for "Randomized Automatic Differentiation" ☆67 · Updated 4 years ago
- Toy implementations of some popular ML optimizers in Python/JAX ☆44 · Updated 3 years ago
- Official code for UnICORNN (ICML 2021) ☆27 · Updated 3 years ago
- Official code repository for the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆105 · Updated 3 years ago
- Repository for the paper on the Adaptive Checkpoint Adjoint (ACA) method for gradient estimation in neural ODEs ☆54 · Updated 4 years ago
- ☆153 · Updated 4 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆64 · Updated 4 years ago
- ☆37 · Updated 3 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- Efficient Householder Transformation in PyTorch ☆65 · Updated 3 years ago
- ☆166 · Updated 9 months ago