davisyoshida / haiku-mup
A port of muP to JAX/Haiku
☆25 · Updated 3 years ago
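For context, muP (Maximal Update Parametrization, Yang & Hu) rescales initialization, output multipliers, and per-layer learning rates with width so that hyperparameters tuned on a narrow model transfer to wider ones. Below is a minimal sketch of one common variant of those rules in plain JAX; the parameter names and the `mup_adam_lr` helper are illustrative assumptions, not haiku-mup's actual API.

```python
# Sketch of muP scaling rules in plain JAX (NOT haiku-mup's API):
# 1/fan_in init variance for hidden weights, a 1/width readout
# multiplier, and Adam learning rates for matrix-like parameters
# scaled by 1/width.
import jax
import jax.numpy as jnp

def init_params(key, d_in, width, d_out):
    k1, k2 = jax.random.split(key)
    return {
        "w_in": jax.random.normal(k1, (d_in, width)) / jnp.sqrt(d_in),
        "w_hidden": jax.random.normal(k2, (width, width)) / jnp.sqrt(width),
        "w_out": jnp.zeros((width, d_out)),  # zero readout init is a common muP choice
    }

def forward(params, x, width):
    h = jax.nn.relu(x @ params["w_in"])
    h = jax.nn.relu(h @ params["w_hidden"])
    return (h @ params["w_out"]) / width  # muP readout multiplier: 1/width

def mup_adam_lr(name, base_lr, width):
    # Width-dependent per-parameter Adam learning rates: matrix-like
    # parameters get base_lr / width; input weights keep the base rate.
    return base_lr / width if name in ("w_hidden", "w_out") else base_lr
```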
Alternatives and similar repositories for haiku-mup
Users interested in haiku-mup are comparing it to the libraries listed below.
- ☆17 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- If it quacks like a tensor... ☆59 · Updated last year
- A functional training loops library for JAX ☆88 · Updated last year
- JMP is a Mixed Precision library for JAX. ☆211 · Updated 9 months ago
- A JAX implementation of stochastic addition. ☆14 · Updated 3 years ago
- minGPT in JAX ☆48 · Updated 3 years ago
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆187 · Updated 3 weeks ago
- ☆107 · Updated last year
- JAX Synergistic Memory Inspector ☆179 · Updated last year
- ☆39 · Updated last year
- Silly twitter torch implementations. ☆46 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- ☆24 · Updated 6 years ago
- [NeurIPS'19] Deep Equilibrium Models Jax Implementation ☆42 · Updated 5 years ago
- A simple library for scaling up JAX programs ☆144 · Updated last week
- Fast Discounted Cumulative Sums in PyTorch ☆96 · Updated 4 years ago
- ☆159 · Updated last year
- Train very large language models in Jax. ☆209 · Updated 2 years ago
- Jax/Flax rewrite of Karpathy's nanoGPT ☆62 · Updated 2 years ago
- Sequence Modeling with Structured State Spaces ☆66 · Updated 3 years ago
- Easy Hypernetworks in Pytorch and Jax ☆105 · Updated 2 years ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆38 · Updated 2 years ago
- Neural Networks for JAX ☆84 · Updated last year
- Einsum-like high-level array sharding API for JAX ☆34 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- ☆60 · Updated 3 years ago
- Tidy autoregressive inference in JAX ☆15 · Updated 2 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆37 · Updated last year