lucidrains / deep-linear-network
A simple implementation of a deep linear network as a PyTorch module
☆20 · Updated 4 years ago
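A deep linear network is just a stack of linear layers with no nonlinearities in between; the composition collapses to a single linear map, but the overparameterized form has different training dynamics. A minimal sketch of such a module in PyTorch follows — the class name, constructor arguments, and bias-free design are illustrative assumptions, not the repository's actual API:

```python
import torch
from torch import nn

class DeepLinear(nn.Module):
    """Stack of linear layers with no activations in between.

    Names and arguments are illustrative, not the repo's actual API.
    """
    def __init__(self, dim_in, dim_hidden, dim_out, depth):
        super().__init__()
        # e.g. depth=4 gives dims [dim_in, dim_hidden, dim_hidden, dim_hidden, dim_out]
        dims = [dim_in] + [dim_hidden] * (depth - 1) + [dim_out]
        self.layers = nn.Sequential(*[
            nn.Linear(d_in, d_out, bias=False)
            for d_in, d_out in zip(dims[:-1], dims[1:])
        ])

    def forward(self, x):
        return self.layers(x)

# usage
model = DeepLinear(dim_in=128, dim_hidden=256, dim_out=10, depth=4)
out = model(torch.randn(2, 128))  # shape (2, 10)
```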
Alternatives and similar repositories for deep-linear-network:
Users interested in deep-linear-network are comparing it to the libraries listed below.
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆19 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆20 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆56 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch ☆53 · Updated 3 years ago
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- JAX implementation of "Learning to learn by gradient descent by gradient descent" ☆27 · Updated 6 months ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 2 years ago
- A convolution-free, transformer-only version of the CycleGAN framework ☆33 · Updated 3 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago
- Reproduces experiments from "Grounding inductive biases in natural images: invariance stems from variations in data" ☆17 · Updated 7 months ago
- ☆24 · Updated last year
- ☆29 · Updated 2 years ago
- Layerwise Batch Entropy Regularization ☆22 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch ☆12 · Updated 3 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated last year
- ☆21 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆73 · Updated 2 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆52 · Updated 4 years ago
- Implementation of Kronecker Attention in Pytorch ☆18 · Updated 4 years ago