lucidrains / compositional-attention-pytorch
Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process with disentangled search and retrieval head aggregation, in Pytorch
☆51 · Updated 3 years ago
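The two-step idea can be summarized briefly: rather than hard-wiring each head's query/key "search" to its own value "retrieval", the model keeps S searches and R retrievals separate, and a second soft-attention step lets every search choose which retrieval to read from. The following minimal PyTorch sketch illustrates that mechanism under stated assumptions; it is not the repo's actual code, and the class name `CompositionalAttentionSketch` plus the selection parameters `sel_q` / `sel_k` are hypothetical simplifications.

```python
# Hypothetical sketch of compositional attention's two-step mechanism.
# Not the repo's implementation; names and shapes are illustrative.
import torch
from torch import nn

class CompositionalAttentionSketch(nn.Module):
    def __init__(self, dim, searches=4, retrievals=2, dim_head=64):
        super().__init__()
        self.s, self.r, self.dh = searches, retrievals, dim_head
        self.to_q = nn.Linear(dim, searches * dim_head, bias=False)
        self.to_k = nn.Linear(dim, searches * dim_head, bias=False)
        self.to_v = nn.Linear(dim, retrievals * dim_head, bias=False)
        # queries/keys for the second (retrieval-selection) attention step
        self.sel_q = nn.Linear(dim, searches * dim_head, bias=False)
        self.sel_k = nn.Parameter(torch.randn(retrievals, dim_head))
        self.to_out = nn.Linear(searches * dim_head, dim)

    def forward(self, x):
        b, n, _ = x.shape
        q = self.to_q(x).view(b, n, self.s, self.dh).transpose(1, 2)  # (b, s, n, d)
        k = self.to_k(x).view(b, n, self.s, self.dh).transpose(1, 2)  # (b, s, n, d)
        v = self.to_v(x).view(b, n, self.r, self.dh).transpose(1, 2)  # (b, r, n, d)

        # step 1: search — each search head computes its own attention map
        attn = (q @ k.transpose(-1, -2) * self.dh ** -0.5).softmax(dim=-1)  # (b, s, n, n)
        # every search attends over every retrieval's values: (b, s, r, n, d)
        retrieved = torch.einsum('bsij,brjd->bsrid', attn, v)

        # step 2: retrieval selection — each search softly picks a retrieval
        sel = self.sel_q(x).view(b, n, self.s, self.dh).transpose(1, 2)       # (b, s, n, d)
        sel_logits = torch.einsum('bsnd,rd->bsrn', sel, self.sel_k) * self.dh ** -0.5
        sel_attn = sel_logits.softmax(dim=2)                                   # softmax over retrievals
        out = (retrieved * sel_attn.unsqueeze(-1)).sum(dim=2)                  # (b, s, n, d)

        out = out.transpose(1, 2).reshape(b, n, self.s * self.dh)
        return self.to_out(out)

# usage: input and output are both (batch, seq, dim)
module = CompositionalAttentionSketch(dim=256, searches=4, retrievals=2)
x = torch.randn(1, 128, 256)
out = module(x)  # (1, 128, 256)
```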
Alternatives and similar repositories for compositional-attention-pytorch
Users interested in compositional-attention-pytorch are comparing it to the libraries listed below.
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 3 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆53 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Implementation of LogAvgExp for Pytorch ☆36 · Updated last month
- ☆29 · Updated 2 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆59 · Updated 2 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated last year
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆50 · Updated 3 years ago
- ☆33 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Un-*** 50-billion multimodality dataset ☆22 · Updated 2 years ago
- A convolution-free, transformer-only version of the CycleGAN framework ☆33 · Updated 3 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated last year
- Implementation of Metaformer, but in an autoregressive manner ☆25 · Updated 2 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆53 · Updated 4 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆58 · Updated 4 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- ☆8 · Updated last year
- ☆47 · Updated 2 years ago
- A project to improve out-of-distribution detection (open set recognition) and uncertainty estimation by changing a few lines of code in y… ☆45 · Updated 2 years ago
- PyTorch reimplementation of the paper "HyperMixer: An MLP-based Green AI Alternative to Transformers" [arXiv 2022]. ☆17 · Updated 3 years ago
- Code for ICLR 2021 Paper, "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" ☆26 · Updated last year