google-deepmind / dks
Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural network models (and their initializations) to make them easier to train.
☆74 · Updated 5 months ago
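The flavor of what DKS/TAT do can be illustrated with a toy activation transformation. This is an illustrative sketch only, not the dks library's API: the real library computes its transformation constants from the network's architecture, whereas here we just pick a small fixed scale to push tanh toward the identity map at initialization.

```python
import math

def transformed_tanh(x: float, alpha: float = 0.1) -> float:
    """Toy 'tailored' tanh: shrink the input and rescale the output.

    For small alpha this pushes tanh toward the identity map, weakening
    the nonlinearity at initialization. This only illustrates the idea;
    dks derives its scaling constants from the model itself.
    """
    return math.tanh(alpha * x) / alpha

# The transformed activation stays close to the identity for moderate
# inputs, while plain tanh already saturates noticeably:
print(round(transformed_tanh(1.0), 4))  # ≈ 0.9967
print(round(math.tanh(1.0), 4))         # ≈ 0.7616
```

Making the network behave more like a linear map at initialization (while keeping its expressivity for training) is the intuition behind why these transformations make deep models easier to optimize.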
Alternatives and similar repositories for dks
Users interested in dks are comparing it to the libraries listed below.
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆60 · Updated 3 years ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated this week
- ☆62 · Updated last year
- Meta-learning inductive biases in the form of useful conserved quantities. ☆38 · Updated 3 years ago
- Open source code for EigenGame. ☆33 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆46 · Updated 2 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- minGPT in JAX ☆48 · Updated 3 years ago
- ☆118 · Updated last month
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Code accompanying the paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Differentiable Algorithms and Algorithmic Supervision. ☆116 · Updated 2 years ago
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- ☆62 · Updated 3 years ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Neural Networks for JAX ☆84 · Updated last year
- Running JAX in PyTorch Lightning ☆115 · Updated 11 months ago
- JAX-like function transformation engine, but micro: microjax ☆33 · Updated last year
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆117 · Updated 3 years ago
- ☆31 · Updated 3 weeks ago
- Code for the papers "Linear Algebra with Transformers" (TMLR) and "What is my Math Transformer Doing?" (AI for Maths Workshop, NeurIPS 2022) ☆76 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 10 months ago
- FID computation in JAX/Flax. ☆29 · Updated last year
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 4 years ago
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- ☆34 · Updated last year
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆29 · Updated 5 years ago