f-dangel / singd
[ICML 2024] SINGD: KFAC-like Structured Inverse-Free Natural Gradient Descent (http://arxiv.org/abs/2312.05705)
☆22 · Updated 8 months ago
Alternatives and similar repositories for singd
Users interested in singd are comparing it to the libraries listed below:
- [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivative… ☆17 · Updated last year
- ☆16 · Updated 2 years ago
- ☆53 · Updated 9 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 9 months ago
- Transformers with doubly stochastic attention ☆46 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆124 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated 2 years ago
- ☆51 · Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆179 · Updated last month
- ☆32 · Updated 9 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆27 · Updated 3 years ago
- SGD with large step sizes learns sparse features [ICML 2023] ☆32 · Updated 2 years ago
- Code for "SAM as an Optimal Relaxation of Bayes", ICLR 2023. ☆25 · Updated last year
- ☆10 · Updated 2 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- PyTorch linear operators for curvature matrices (Hessian, Fisher/GGN, KFAC, ...) ☆40 · Updated 2 months ago
- [ICML 2024] SIRFShampoo: Structured inverse- and root-free Shampoo in PyTorch (https://arxiv.org/abs/2402.03496) ☆14 · Updated 8 months ago
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- Implementations of orthogonal and semi-orthogonal convolutions in the Fourier domain with applications to adversarial robustness ☆46 · Updated 4 years ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆37 · Updated 2 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆63 · Updated last year
- Convolutions and more as einsum for PyTorch ☆17 · Updated last year
- ☆70 · Updated 7 months ago
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆41 · Updated 2 years ago
- Sketched matrix decompositions for PyTorch ☆70 · Updated 2 weeks ago
- ASDL: Automatic Second-order Differentiation Library for PyTorch ☆188 · Updated 7 months ago
- FID computation in Jax/Flax. ☆28 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- ☆33 · Updated 2 years ago