google-deepmind / dks
Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural network models (and their initializations) to make them easier to train.
☆71 · Updated last month
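The core idea behind the transformations dks implements can be illustrated without the library itself. In Deep Kernel Shaping / Tailored Activation Transformations, an activation φ is replaced by an affine-transformed version φ̂(x) = γ·(φ(α·x + β) + δ) whose parameters are solved for so the activation preserves certain kernel statistics across layers. The sketch below is a toy version, not the dks API: it solves only two of the method's constraints (zero mean and unit second moment under a unit-Gaussian input), and `alpha` is left as a free "degree of nonlinearity" knob here rather than being determined by the method's full conditions.

```python
# Toy sketch of a DKS/TAT-style transformed activation (NOT the dks library's
# API): phi_hat(x) = gamma * (phi(alpha * x) + delta), with delta and gamma
# solved so that for Z ~ N(0, 1), E[phi_hat(Z)] = 0 and E[phi_hat(Z)^2] = 1.
# The real method imposes more conditions (e.g. on derivatives of the local
# kernel map); alpha = 0.5 is an arbitrary illustrative choice.
import numpy as np

def gaussian_expectation(f, n=80):
    """E[f(Z)] for Z ~ N(0, 1) via probabilists' Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return float(w @ f(x)) / np.sqrt(2.0 * np.pi)

def transform_activation(phi, alpha):
    """Solve for delta (centers phi) and gamma (normalizes its 2nd moment)."""
    delta = -gaussian_expectation(lambda z: phi(alpha * z))
    second = gaussian_expectation(lambda z: (phi(alpha * z) + delta) ** 2)
    gamma = 1.0 / np.sqrt(second)
    return lambda x: gamma * (phi(alpha * x) + delta)

phi_hat = transform_activation(np.tanh, alpha=0.5)

# Transformed tanh maps a unit Gaussian to mean ~0 and second moment ~1,
# so these statistics no longer drift as depth increases.
mean = gaussian_expectation(phi_hat)
second_moment = gaussian_expectation(lambda x: phi_hat(x) ** 2)
print(abs(mean) < 1e-8, abs(second_moment - 1.0) < 1e-8)  # -> True True
```

Applied at every layer of a deep network (together with the method's other conditions and matched initializations), this kind of per-activation reparameterization is what keeps signal statistics stable enough for plain gradient descent to train very deep models.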
Alternatives and similar repositories for dks
Users interested in dks are comparing it to the libraries listed below.
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioners, low-rank approximation precondition… ☆179 · Updated last week
- ☆60 · Updated 3 years ago
- ☆31 · Updated last month
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆85 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- minGPT in JAX ☆48 · Updated 3 years ago
- ☆53 · Updated 10 months ago
- JAX-like function transformation engine, but micro: microjax ☆33 · Updated 9 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Latent Diffusion Language Models ☆69 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆89 · Updated last year
- ☆28 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- ☆12 · Updated last week
- Differentiable Algorithms and Algorithmic Supervision. ☆116 · Updated 2 years ago
- ☆115 · Updated last week
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆31 · Updated 2 years ago
- ☆61 · Updated 3 years ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 6 months ago
- ☆32 · Updated 10 months ago
- Neural Networks for JAX ☆84 · Updated 10 months ago
- Code for the papers "Linear Algebra with Transformers" (TMLR) and "What is my Math Transformer Doing?" (AI for Maths Workshop, NeurIPS 2022) ☆75 · Updated 11 months ago
- RWKV model implementation ☆38 · Updated 2 years ago
- Running JAX in PyTorch Lightning ☆109 · Updated 7 months ago
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- ☆34 · Updated 10 months ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year