giannisdaras / smyrf
[NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering".
☆49 · Updated last year
Alternatives and similar repositories for smyrf
Users interested in smyrf are comparing it to the libraries listed below:
- Official PyTorch implementation of Spatial Dependency Networks. ☆35 · Updated 4 years ago
- Code for ICLR 2021 Paper, "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" ☆26 · Updated last year
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 3 years ago
- ☆24 · Updated last year
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 5 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- PyTorch implementation for our NAACL 2019 paper "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling" http… ☆62 · Updated 5 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆118 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Code for reversible recurrent neural networks ☆39 · Updated 6 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆48 · Updated 4 years ago
- Masked Convolutional Flow ☆59 · Updated 5 years ago
- ☆41 · Updated 2 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆27 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆56 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Updated 3 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- ☆63 · Updated 3 years ago
- Hyperbolic Neural Networks, pytorch ☆86 · Updated 5 years ago
- Code for "MIM: Mutual Information Machine" paper. ☆16 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Invertible Generative Flows ☆83 · Updated 4 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated 2 years ago