JesseFarebro / flax-mup
Maximal Update Parametrization (μP) with Flax & Optax.
☆16 · Updated last year
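For orientation, here is a minimal sketch of the recipe a library like this automates, following the μP rules from Yang & Hu (2021) for Adam: hidden-layer learning rates shrink like 1/width and the readout is down-scaled by 1/width, so hyperparameters tuned at a small base width transfer to larger widths. Everything below (the `MuMLP` module, the `base_width`/`width` names, the layer labeling) is illustrative, not flax-mup's actual API:

```python
# Hypothetical μP sketch with Flax & Optax; NOT flax-mup's API.
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

base_width, width = 128, 1024   # tune hyperparameters at base_width, train at width
mult = width / base_width       # width multiplier

class MuMLP(nn.Module):         # illustrative module, for demonstration only
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(width)(x))   # input layer (Dense_0)
        x = nn.relu(nn.Dense(width)(x))   # hidden layer (Dense_1)
        # μP readout: zero-init kernel plus a 1/width multiplier keeps logits O(1)
        return nn.Dense(10, kernel_init=nn.initializers.zeros)(x) / mult

params = MuMLP().init(jax.random.PRNGKey(0), jnp.ones((1, 64)))

def label_fn(tree):
    # Label the hidden layer's parameters so they get the 1/width Adam rate;
    # a full μP setup distinguishes input/hidden/output layers more carefully.
    return jax.tree_util.tree_map_with_path(
        lambda path, _: "hidden"
        if any(getattr(p, "key", None) == "Dense_1" for p in path)
        else "other",
        tree)

base_lr = 3e-3
tx = optax.multi_transform(
    {"hidden": optax.adam(base_lr / mult),  # μP: hidden LR ~ 1/width
     "other": optax.adam(base_lr)},
    label_fn)
opt_state = tx.init(params)
```

Under this parametrization, a learning rate tuned at `base_width` should stay near-optimal as `width` grows, which is the core promise of μP.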
Alternatives and similar repositories for flax-mup
Users interested in flax-mup are comparing it to the libraries listed below.
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- ☆118 · Updated 3 months ago
- Implementation of PSGD optimizer in JAX ☆34 · Updated 8 months ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 weeks ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- PyTorch-like dataloaders for JAX ☆94 · Updated 3 months ago
- Flow-matching algorithms in JAX ☆104 · Updated last year
- ☆34 · Updated 9 months ago
- Lightning-like training API for JAX with Flax ☆42 · Updated 9 months ago
- Minimal yet performant LLM examples in pure JAX ☆152 · Updated this week
- 🧱 Modula software package ☆237 · Updated 3 weeks ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- Einsum-like high-level array sharding API for JAX ☆35 · Updated last year
- Run PyTorch in JAX. 🤝 ☆284 · Updated last week
- nanoGPT using Equinox ☆13 · Updated 2 years ago
- Scalable and Stable Parallelization of Nonlinear RNNs ☆22 · Updated 2 weeks ago
- JAX/Flax rewrite of Karpathy's nanoGPT ☆60 · Updated 2 years ago
- Minimal, lightweight JAX implementations of popular models ☆105 · Updated this week
- Running JAX in PyTorch Lightning ☆113 · Updated 8 months ago
- ☆57 · Updated 11 months ago
- ☆279 · Updated last year
- ☆65 · Updated 10 months ago
- ☆210 · Updated 9 months ago
- Implementation of Gradient Agreement Filtering (Chaubard et al., Stanford) for single-machine microbatches, in PyTorch ☆25 · Updated 7 months ago
- A simple, performant and scalable JAX-based world modeling codebase ☆72 · Updated last week
- JAX Arrays for human consumption ☆106 · Updated 2 months ago
- Accelerated First Order Parallel Associative Scan ☆188 · Updated last year (see the scan sketch after this list)
- JAX implementation of the Mistral 7b v0.2 model ☆36 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆36 · Updated last year
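The parallel associative scan entry above refers to the prefix-scan pattern, for which JAX ships a log-depth primitive. A tiny illustrative example on plain prefix sums (not that repository's code):

```python
import jax.numpy as jnp
from jax.lax import associative_scan

# Prefix sums computed with a log-depth parallel scan; any associative binary
# op works, which is what makes linear recurrences parallelizable on GPU/TPU.
xs = jnp.arange(1.0, 6.0)
print(associative_scan(jnp.add, xs))  # [ 1.  3.  6. 10. 15.]
```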