giannisdaras / smyrf
[NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering".
☆50 · Updated last year
Alternatives and similar repositories for smyrf
Users interested in smyrf are comparing it to the libraries listed below.
- ☆24 · Updated last year
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- GPT, but made only out of MLPs. ☆89 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain. ☆34 · Updated 4 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆105 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch. ☆36 · Updated 3 years ago
- Code for the ICLR 2021 paper "Anytime Sampling for Autoregressive Models via Ordered Autoencoding". ☆26 · Updated 2 years ago
- [EMNLP'19] Summary for Transformer Understanding. ☆53 · Updated 5 years ago
- An implementation of the (Induced) Set Attention Block, from the Set Transformer paper. ☆60 · Updated 2 years ago
- Usable implementation of the Emerging Symbol Binding Network (ESBN), in PyTorch. ☆25 · Updated 4 years ago
- A GPT made only of MLPs, in JAX. ☆58 · Updated 4 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation. ☆69 · Updated 4 years ago
- Code accompanying the paper "Normalized Attention Without Probability Cage". ☆16 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention), from https://arxiv.org/abs/2104.07012. ☆49 · Updated 3 years ago
- Code for the paper "PermuteFormer". ☆42 · Updated 3 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing. ☆50 · Updated 3 years ago
- Official code for the paper "Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors". ☆28 · Updated 5 years ago
- Implementation of Kronecker Attention in PyTorch. ☆19 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021). ☆62 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms. ☆36 · Updated 4 years ago
- Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch. ☆119 · Updated 3 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 5 years ago
- ☆11 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me. ☆38 · Updated 3 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention. ☆49 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Very deep VAEs in JAX/Flax. ☆46 · Updated 4 years ago
- ☆41 · Updated 4 years ago
- Code for the UAI 2020 paper "Locally Masked Convolution for Autoregressive Models". ☆78 · Updated 5 years ago