nestordemeure / flaxOptimizers
A collection of optimizers, some arcane, others well known, for Flax.
☆29 · Updated 4 years ago
Alternatives and similar repositories for flaxOptimizers
Users interested in flaxOptimizers are comparing it to the libraries listed below.
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- a lightweight transformer library for PyTorch ☆72 · Updated 4 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 4 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Large dataset storage format for Pytorch ☆45 · Updated 4 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Image augmentation library for Jax ☆40 · Updated last year
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 5 years ago
- Layerwise Batch Entropy Regularization ☆24 · Updated 3 years ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆79 · Updated 5 years ago
- Python Research Framework ☆107 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 4 years ago
- Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload ☆131 · Updated 3 years ago
- 👑 Pytorch code for the Nero optimiser. ☆20 · Updated 3 years ago
- Differentiable Algorithms and Algorithmic Supervision. ☆116 · Updated 2 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago
- A lightweight library for tensorflow 2.0 ☆65 · Updated 6 years ago
- AdaCat ☆49 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆129 · Updated last year
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 5 years ago
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆118 · Updated 3 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆16 · Updated 4 years ago
- An open source implementation of CLIP. ☆33 · Updated 3 years ago
- Toy implementations of some popular ML optimizers using Python/JAX ☆44 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Functional deep learning ☆108 · Updated 3 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in Pytorch. ☆57 · Updated 4 years ago