google-deepmind / dks
Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural network models (and their initializations) to make them easier to train.
☆74 · Updated 4 months ago
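For context, a minimal sketch of how dks is typically used: request a DKS/TAT-transformed activation from the library and drop it into a model in place of the raw nonlinearity. The module path, `get_transformed_activations`, and the `subnet_max_func` argument below reflect a reading of the upstream README and should be treated as assumptions rather than a verified API; the maximizing function assumes a plain depth-5 MLP.

```python
# Hedged sketch (not verified against the current dks API): obtain a
# TAT-transformed softplus from dks and evaluate it on some values in JAX.
import jax.numpy as jnp
from dks.jax import activation_transform  # assumed module path

# Assumption for this sketch: for a plain depth-5 MLP, the subnetwork
# maximizing function is just the per-layer map composed 5 times.
def subnet_max_func(x, r_fn, depth=5):
    for _ in range(depth):
        x = r_fn(x)
    return x

# Ask dks for a transformed activation to use instead of the raw softplus.
act_dict = activation_transform.get_transformed_activations(
    ["softplus"], method="TAT", subnet_max_func=subnet_max_func)
transformed_softplus = act_dict["softplus"]

print(transformed_softplus(jnp.linspace(-3.0, 3.0, 7)))
```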
Alternatives and similar repositories for dks
Users interested in dks are comparing it to the libraries listed below.
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated last month
- ☆117 · Updated last week
- Meta-learning inductive biases in the form of useful conserved quantities. ☆38 · Updated 3 years ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Jax like function transformation engine but micro, microjax ☆33 · Updated last year
- ☆61 · Updated last year
- ☆60 · Updated 3 years ago
- Neural Networks for JAX ☆84 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- Open source code for EigenGame. ☆33 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- Latent Diffusion Language Models ☆69 · Updated 2 years ago
- Running Jax in PyTorch Lightning ☆114 · Updated 11 months ago
- ☆62 · Updated 3 years ago
- ☆31 · Updated last week
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated last year
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆32 · Updated 2 years ago
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆115 · Updated 3 years ago
- ☆34 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆62 · Updated last month
- ☆192 · Updated 4 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆25 · Updated 9 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- A collection of optimizers, some arcane others well known, for Flax. ☆29 · Updated 4 years ago
- ☆79 · Updated this week
- JAX implementation of Learning to learn by gradient descent by gradient descent ☆28 · Updated 3 months ago
- A functional training loops library for JAX ☆88 · Updated last year
- The Energy Transformer block, in JAX ☆61 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year