f-dangel / singd
[ICML 2024] SINGD: KFAC-like Structured Inverse-Free Natural Gradient Descent (http://arxiv.org/abs/2312.05705)
☆ 24 · Updated last year
Alternatives and similar repositories for singd
Users interested in singd are comparing it to the libraries listed below.
- [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivative… ☆ 17 · Updated 2 years ago
- ☆ 62 · Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆ 191 · Updated last week
- ☆ 33 · Updated last year
- ☆ 52 · Updated last year
- ☆ 33 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆ 70 · Updated last year
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020 ☆ 77 · Updated last year
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆ 29 · Updated 4 years ago
- ☆ 52 · Updated last month
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆ 127 · Updated 2 years ago
- Deep Networks Grok All the Time and Here is Why ☆ 38 · Updated last year
- Convex potential flows ☆ 85 · Updated 4 years ago
- ☆ 35 · Updated last year
- Blog post ☆ 17 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆ 37 · Updated 3 years ago
- Efficient Householder Transformation in PyTorch ☆ 69 · Updated 4 years ago
- FID computation in Jax/Flax ☆ 29 · Updated last year
- ☆ 13 · Updated 2 weeks ago
- Transformers with doubly stochastic attention ☆ 52 · Updated 3 years ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆ 75 · Updated 6 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆ 92 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆ 40 · Updated 2 years ago
- Open source code for EigenGame ☆ 34 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆ 37 · Updated 2 years ago
- ☆ 35 · Updated last year
- Layerwise Batch Entropy Regularization ☆ 24 · Updated 3 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆ 63 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆ 46 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆ 62 · Updated 2 years ago