facebookresearch / projUNN
Fast training of unitary deep network layers from low-rank updates
☆28 · Updated 2 years ago
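projUNN trains unitary (or orthogonal) layers by applying low-rank gradient updates and returning to the unitary manifold; the paper shows a rank-k update can be applied in roughly O(kn²) time rather than the O(n³) of a full reprojection. Below is a minimal NumPy sketch of the rank-1 idea, using an explicit SVD-based polar projection for clarity. The function names are illustrative, not the library's API, and the full-SVD projection shown is exactly the O(n³) step the actual method avoids by exploiting the low-rank structure.

```python
import numpy as np

def project_to_unitary(M):
    # Polar projection: the closest unitary matrix to M in Frobenius
    # norm is U @ Vh, where M = U diag(s) Vh is the SVD of M.
    U, _, Vh = np.linalg.svd(M)
    return U @ Vh

def rank1_unitary_step(W, a, b, lr=1e-2):
    # Illustrative projUNN-D-style step: apply a rank-1 update
    # (a, b stand in for the rank-1 factors of an approximate
    # gradient), then project back onto the unitary manifold.
    return project_to_unitary(W - lr * np.outer(a, b.conj()))

# Toy usage: start from a random unitary and take one step.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
W = project_to_unitary(X)
W_next = rank1_unitary_step(W, rng.normal(size=8), rng.normal(size=8))
assert np.allclose(W_next @ W_next.conj().T, np.eye(8), atol=1e-8)
```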
Alternatives and similar repositories for projUNN:
Users interested in projUNN are comparing it to the libraries listed below.
- ☆53 · Updated 7 months ago
- [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivative… · ☆17 · Updated last year
- ☆32 · Updated 7 months ago
- Meta-learning inductive biases in the form of useful conserved quantities. · ☆37 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings (see the linear-attention sketch after this list) · ☆44 · Updated last year
- Deep Networks Grok All the Time and Here is Why · ☆34 · Updated 11 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs · ☆36 · Updated 2 years ago
- ☆51 · Updated 10 months ago
- JAX implementation of "Fine-Tuning Language Models with Just Forward Passes" · ☆19 · Updated last year
- The Energy Transformer block, in JAX · ☆57 · Updated last year
- Transformers with doubly stochastic attention (see the Sinkhorn sketch after this list) · ☆45 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- Blog post · ☆17 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) · ☆62 · Updated 3 years ago
- ☆49 · Updated last year
- ☆11 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton · ☆78 · Updated last year
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural network… · ☆70 · Updated last week
- ☆16 · Updated 2 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) · ☆30 · Updated last year
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. · ☆75 · Updated 9 months ago
- A selection of neural network models ported from torchvision for JAX & Flax. · ☆44 · Updated 4 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆66 · Updated 7 months ago
- Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023] · ☆32 · Updated 8 months ago
- Implementation of the PSGD optimizer in JAX · ☆33 · Updated 4 months ago
- [ICML 2024] SINGD: KFAC-like Structured Inverse-Free Natural Gradient Descent (http://arxiv.org/abs/2312.05705) · ☆21 · Updated 6 months ago
- FID computation in JAX/Flax. · ☆27 · Updated 9 months ago
- ☆43 · Updated last month
- ☆30 · Updated 5 months ago
- If it quacks like a tensor... · ☆58 · Updated 5 months ago
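As referenced from the autoregressive linear-attention entry above: linear attention replaces softmax with a feature map φ, so causal attention reduces to a running sum and costs O(n) per sequence instead of O(n²). A minimal NumPy sketch assuming φ(x) = elu(x) + 1 (as in Katharopoulos et al., 2020); the names and the small epsilon are illustrative, and the repo itself implements this as a fused CUDA kernel rather than a Python loop.

```python
import numpy as np

def phi(x):
    # Positive feature map: elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V):
    # out_i = phi(q_i) @ S_i / (phi(q_i) @ z_i), where S_i and z_i
    # are running sums of outer(phi(k_j), v_j) and phi(k_j) for j <= i.
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    z = np.zeros(d)
    out = np.empty_like(V)
    for i in range(n):
        q, k = phi(Q[i]), phi(K[i])
        S += np.outer(k, V[i])
        z += k
        out[i] = (q @ S) / (q @ z + 1e-6)
    return out

# Toy usage.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 4)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (5, 4)
```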
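And for the doubly-stochastic-attention entry (the Sinkformers line of work is one instance of this idea): softmax's row-wise normalization is replaced by Sinkhorn iterations that alternately normalize rows and columns, so the attention matrix approaches a doubly stochastic one. A small NumPy sketch in the same spirit, with the iteration count picked arbitrarily:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_attention(scores, n_iters=15):
    # Alternate row and column normalization in log space; for a
    # square score matrix this converges toward a doubly stochastic
    # matrix (Sinkhorn's theorem, since exp(scores) is positive).
    log_p = scores
    for _ in range(n_iters):
        log_p = log_p - logsumexp(log_p, axis=1, keepdims=True)  # rows
        log_p = log_p - logsumexp(log_p, axis=0, keepdims=True)  # cols
    return np.exp(log_p)

# Toy check: rows and columns both sum to ~1.
rng = np.random.default_rng(0)
P = sinkhorn_attention(rng.normal(size=(6, 6)))
print(P.sum(axis=0), P.sum(axis=1))
```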