nestordemeure / flaxOptimizers
A collection of optimizers, some arcane, others well known, for Flax.
☆29 · Updated 3 years ago
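Collections like this typically boil each optimizer down to a pure init/update pair over JAX pytrees. As a hedged sketch only (the function names below are illustrative, not flaxOptimizers' actual API), momentum SGD written in that style looks like this:

```python
import jax
import jax.numpy as jnp

def init_momentum(params):
    # One zero-initialized velocity buffer per parameter leaf.
    # (Illustrative helper, not part of flaxOptimizers.)
    return jax.tree_util.tree_map(jnp.zeros_like, params)

def momentum_update(params, grads, velocity, lr=1e-2, beta=0.9):
    # Classic heavy-ball update: v <- beta * v + g, then p <- p - lr * v.
    velocity = jax.tree_util.tree_map(lambda v, g: beta * v + g, velocity, grads)
    params = jax.tree_util.tree_map(lambda p, v: p - lr * v, params, velocity)
    return params, velocity

# Toy usage: drive a quadratic loss toward zero.
params = {"w": jnp.array([1.0, -2.0])}
velocity = init_momentum(params)
loss = lambda p: jnp.sum(p["w"] ** 2)
for _ in range(100):
    grads = jax.grad(loss)(params)
    params, velocity = momentum_update(params, grads, velocity)
```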
Alternatives and similar repositories for flaxOptimizers:
Users interested in flaxOptimizers are comparing it to the libraries listed below.
- A GPT, made only of MLPs, in JAX ☆57 · Updated 3 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- 👑 PyTorch code for the Nero optimiser. ☆20 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆19 · Updated 4 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- An open-source implementation of CLIP. ☆32 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- AdaCat ☆49 · Updated 2 years ago
- Implementation of Kronecker Attention in PyTorch ☆18 · Updated 4 years ago
- High-performance PyTorch modules ☆18 · Updated 2 years ago
- PyTorch implementation of GLOM ☆22 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆35 · Updated 3 years ago
- ☆27 · Updated 4 years ago
- A JAX nn library ☆21 · Updated last month
- Image augmentation library for JAX ☆39 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆15 · Updated 3 years ago
- Large dataset storage format for PyTorch ☆45 · Updated 3 years ago
- A simple implementation of a deep linear PyTorch module ☆19 · Updated 4 years ago
- JAX implementation of "Learning to learn by gradient descent by gradient descent" ☆27 · Updated 5 months ago
- Toy implementations of some popular ML optimizers using Python/JAX ☆44 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- 👩 PyTorch and JAX code for the Madam optimiser. ☆51 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi…☆50Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- A framework for implementing equivariant DL ☆10 · Updated 3 years ago
- Hacks for PyTorch ☆19 · Updated last year