ischlag / fast-weight-transformers
Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers".
☆111 · Updated 4 years ago
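For orientation, the paper's central observation is that linear attention with a feature map φ is equivalent to an outer-product fast-weight memory that is rewritten at every time step, and it proposes a delta-rule update that corrects the stored value instead of blindly accumulating it. Below is a minimal PyTorch sketch of that update; it is illustrative only, not the repository's actual API, and the function name, argument shapes, and the assumption that φ has already been applied to the queries and keys are mine.

```python
import torch

def delta_rule_fast_weights(q, k, v, beta):
    """Sequential fast-weight memory with the delta update rule:
        W_t = W_{t-1} + beta_t * (v_t - W_{t-1} k_t) k_t^T
    q, k: (T, d_key) with the feature map already applied; v: (T, d_value);
    beta: (T,) write strengths in [0, 1]. Returns per-step reads, (T, d_value)."""
    T, d_key = k.shape
    d_value = v.shape[1]
    W = torch.zeros(d_value, d_key)   # fast weight matrix, starts empty
    outputs = []
    for t in range(T):
        v_hat = W @ k[t]              # what the memory currently returns for k_t
        W = W + beta[t] * torch.outer(v[t] - v_hat, k[t])  # correct toward v_t
        outputs.append(W @ q[t])      # read the memory with the query
    return torch.stack(outputs)
```

With beta[t] = 1 and unit-norm keys, the update overwrites whatever the memory previously associated with k_t; with beta[t] = 0, the memory is left untouched for that step.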
Alternatives and similar repositories for fast-weight-transformers
Users interested in fast-weight-transformers are comparing it to the libraries listed below.
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Updated 7 months ago
- [NeurIPS 2020] Official implementation: "SMYRF: Efficient Attention using Asymmetric Clustering" ☆50 · Updated 2 years ago
- PyTorch implementation of Compressive Transformers, from DeepMind ☆163 · Updated 4 years ago
- Implementation of a Transformer that ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆102 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- Code for the paper "PermuteFormer" ☆42 · Updated 4 years ago
- Implementation of deep implicit attention in PyTorch ☆65 · Updated 4 years ago
- An implementation of the (Induced) Set Attention Block, from the Set Transformers paper ☆65 · Updated 3 years ago
- Fast Discounted Cumulative Sums in PyTorch ☆97 · Updated 4 years ago
- Code to reproduce the results for Compositional Attention ☆59 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 6 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in PyTorch ☆57 · Updated 4 years ago
- Implementation of Feedback Transformer in PyTorch ☆108 · Updated 4 years ago
- Differentiable Sorting Networks ☆125 · Updated 2 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆170 · Updated 4 years ago
- Axial Positional Embedding for PyTorch ☆84 · Updated 10 months ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data", published at ICLR 2022. https://arxiv.org/abs/21… ☆125 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- Structured matrices for compressing neural networks ☆67 · Updated 2 years ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆208 · Updated last year
- Standalone Product Key Memory module in PyTorch, for augmenting Transformer models ☆87 · Updated 2 months ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆153 · Updated 2 years ago
- CUDA kernels for generalized matrix multiplication in PyTorch ☆85 · Updated 4 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 5 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 5 years ago