facebookresearch / projUNN
Fast training of unitary deep network layers from low-rank updates
☆28 · Updated 2 years ago
Alternatives and similar repositories for projUNN
Users interested in projUNN are comparing it to the repositories listed below.
- ☆51 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆84 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivative… ☆17 · Updated last year
- ☆53 · Updated 9 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆71 · Updated 2 weeks ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- MaskedTensors for PyTorch ☆38 · Updated 3 years ago
- JAX implementation of "Fine-Tuning Language Models with Just Forward Passes" ☆19 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- FID computation in Jax/Flax. ☆28 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- ☆32 · Updated 9 months ago
- JAX implementation of Learning to learn by gradient descent by gradient descent ☆27 · Updated 9 months ago
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- ☆31 · Updated 7 months ago
- Open source code for paper "On the Learning and Learnability of Quasimetrics". ☆32 · Updated 2 years ago
- Code for "Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations" ☆23 · Updated 2 years ago
- ☆32 · Updated last year
- Image augmentation library for Jax ☆39 · Updated last year
- Efficient Householder Transformation in PyTorch ☆66 · Updated 4 years ago
- Blog post ☆17 · Updated last year
- The Energy Transformer block, in JAX ☆57 · Updated last year
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆27 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆49 · Updated last month