lucidrains / all-normalization-transformer
A simple Transformer where the softmax has been replaced with normalization
☆20 · Updated 4 years ago
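The idea in the description is easiest to see in code. Below is a minimal, hypothetical sketch (plain PyTorch, single head, no masking; the class name and the parameter-free row standardization are illustrative assumptions, not the repository's actual implementation) of self-attention where the softmax over each row of the score matrix is swapped for a layer-norm-style normalization:

```python
import torch
import torch.nn as nn

class NormalizedAttention(nn.Module):
    # Hypothetical sketch: single-head self-attention where the usual
    # row-wise softmax over attention scores is replaced by a
    # layer-norm-style standardization of each score row. Naming and
    # details (no masking, no learned gain, single head) are assumptions
    # for illustration, not taken from the repository.
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.scale = dim ** -0.5
        self.eps = eps
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)

    def forward(self, x):
        # x: (batch, seq, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # scaled dot-product scores: (batch, seq, seq)
        scores = torch.einsum('bid,bjd->bij', q, k) * self.scale
        # standardize each row (zero mean, unit variance) instead of softmax
        mean = scores.mean(dim=-1, keepdim=True)
        std = scores.std(dim=-1, keepdim=True)
        attn = (scores - mean) / (std + self.eps)
        return torch.einsum('bij,bjd->bid', attn, v)

x = torch.randn(2, 16, 64)          # (batch, seq, dim)
out = NormalizedAttention(64)(x)    # (2, 16, 64)
```

Unlike a softmax, the standardized scores are not constrained to be positive or to sum to one, so the mixing weights over values can be negative; that departure from probability-simplex attention is the essence of what the repository explores.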
Alternatives and similar repositories for all-normalization-transformer
Users interested in all-normalization-transformer are comparing it to the libraries listed below.
- A simple implementation of a deep linear PyTorch module ☆21 · Updated 4 years ago
- Local Attention - Flax module for JAX ☆22 · Updated 4 years ago
- A GPT, made only of MLPs, in JAX ☆58 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆36 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Implementation of Tranception, an attention network paired with retrieval that is SOTA for protein fitness prediction ☆32 · Updated 2 years ago
- JAX implementation of "Learning to learn by gradient descent by gradient descent" ☆27 · Updated 7 months ago
- Implementation of Kronecker Attention in PyTorch ☆19 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Unofficial implementation of https://arxiv.org/abs/2112.05682 for linear memory cost attention in PyTorch ☆12 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- ☆24 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆50 · Updated 3 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- High-performance PyTorch modules ☆18 · Updated 2 years ago
- Implementation of LogAvgExp for PyTorch ☆36 · Updated last month
- A JAX nn library ☆21 · Updated 3 months ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 3 years ago
- Implementation of the Triangle Multiplicative module, used in AlphaFold2 as an efficient way to mix rows or columns of a 2D feature map, … ☆29 · Updated 3 years ago
- Implementation of Insertion-Deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- Usable implementation of the Emerging Symbol Binding Network (ESBN), in PyTorch ☆25 · Updated 4 years ago
- ☆11 · Updated 3 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆53 · Updated 2 years ago
- An implementation of the (Induced) Set Attention Block, from the Set Transformers paper ☆59 · Updated 2 years ago
- JAX implementation of Graph Attention Networks ☆13 · Updated 3 years ago
- AdaCat ☆49 · Updated 2 years ago
- Code for the ICLR 2021 paper "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" ☆26 · Updated last year
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated 2 years ago