IDSIA / recurrent-fwp
Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021)
☆49 · Updated 2 years ago
Alternatives and similar repositories for recurrent-fwp
Users interested in recurrent-fwp are comparing it to the libraries listed below.
- Usable implementation of the Emerging Symbol Binding Network (ESBN), in PyTorch ☆25 · Updated 4 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 3 years ago
- Implementation of Hierarchical Transformer Memory (HTM) for PyTorch ☆73 · Updated 3 years ago
- Generalised UDRL ☆37 · Updated 3 years ago
- Meta-learning inductive biases in the form of useful conserved quantities ☆37 · Updated 2 years ago
- Variational Reinforcement Learning ☆16 · Updated 9 months ago
- ☆29 · Updated 3 years ago
- Implementation of a Transformer that ponders, using the scheme from the PonderNet paper ☆80 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- ☆17 · Updated last year
- ☆23 · Updated 3 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆15 · Updated 3 years ago
- ☆24 · Updated last year
- PyTorch package for quasimetric learning ☆42 · Updated 6 months ago
- An adaptive training algorithm for residual networks ☆15 · Updated 4 years ago
- ☆53 · Updated 6 months ago
- [NeurIPS'20] Code for the paper "Compositional Visual Generation and Inference with Energy Based Models" ☆44 · Updated 2 years ago
- A simple implementation of a deep linear PyTorch module ☆21 · Updated 4 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆40 · Updated 5 years ago
- An implementation of the (Induced) Set Attention Block, from the Set Transformers paper ☆56 · Updated 2 years ago
- Official code repository of the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al. ☆28 · Updated 4 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 5 years ago
- Solving reinforcement learning tasks that require language and vision ☆32 · Updated 2 years ago
- ☆49 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Experiments for Meta-Learning Symmetries by Reparameterization ☆56 · Updated 4 years ago
- [ICML'21] Improved Contrastive Divergence Training of Energy-Based Models ☆63 · Updated 3 years ago
- A GPT made only of MLPs, in JAX ☆58 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago