lucidrains / all-normalization-transformer
A simple Transformer where the softmax has been replaced with normalization
☆19 · Updated 4 years ago
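The core idea stated in the description - passing the attention scores through a normalization step instead of a softmax - can be sketched roughly as below. This is a hypothetical single-head PyTorch sketch (the class name `NormAttention` and the choice of LayerNorm over score rows are assumptions), not the repository's actual implementation.

```python
import torch
from torch import nn

class NormAttention(nn.Module):
    # Hypothetical sketch: single-head self-attention where the usual softmax
    # over attention scores is swapped for a LayerNorm over each row of scores.
    # Not the repository's actual implementation.
    def __init__(self, dim, seq_len):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.norm = nn.LayerNorm(seq_len)   # normalizes scores instead of softmax
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        scores = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale
        attn = self.norm(scores)            # replaces softmax(scores, dim=-1)
        out = torch.einsum('b i j, b j d -> b i d', attn, v)
        return self.to_out(out)

attn = NormAttention(dim=64, seq_len=128)
x = torch.randn(2, 128, 64)
print(attn(x).shape)  # torch.Size([2, 128, 64])
```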
Alternatives and similar repositories for all-normalization-transformer:
Users interested in all-normalization-transformer are comparing it to the repositories listed below.
- A simple implementation of a deep linear PyTorch module ☆19 · Updated 4 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆20 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆35 · Updated 3 years ago
- ☆21 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- Implementation of Tranception, an attention network paired with retrieval that is SOTA for protein fitness prediction ☆31 · Updated 2 years ago
- JAX implementation of Learning to learn by gradient descent by gradient descent ☆26 · Updated 4 months ago
- A JAX nn library ☆21 · Updated 11 months ago
- A Python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 2 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 2 years ago
- High-performance PyTorch modules ☆18 · Updated 2 years ago
- Implementation of LogAvgExp for PyTorch ☆33 · Updated 2 years ago
- Unofficial implementation of https://arxiv.org/abs/2112.05682 for linear memory cost attention, in PyTorch ☆12 · Updated 3 years ago
- Usable implementation of the Emergent Symbol Binding Network (ESBN), in PyTorch ☆24 · Updated 4 years ago
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 3 years ago
- Utilities for PyTorch distributed ☆23 · Updated last year
- Reproduces experiments from "Grounding inductive biases in natural images: invariance stems from variations in data" ☆17 · Updated 4 months ago
- ☆29 · Updated 2 years ago
- Pretrained TorchVision models on the CIFAR-10 dataset (with weights) ☆24 · Updated 4 years ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- ☆24 · Updated 9 months ago
- Flax (JAX) implementation of Progressive Growing of GANs for Improved Quality, Stability, and Variation ☆12 · Updated 3 years ago
- JAX implementation of Graph Attention Networks ☆13 · Updated 3 years ago
- AdaCat ☆49 · Updated 2 years ago
- ESGD-M, a stochastic second-order optimizer for non-convex problems, suitable for training deep learning models, in PyTorch ☆56 · Updated 2 years ago