IDSIA / recurrent-fwp
Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021)
☆50 · Updated 5 months ago
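The core mechanism this repository builds on is the fast weight programmer: a slow network writes key/value associations into a fast weight matrix at every step, and the recurrent-fwp paper extends this with recurrent variants. A minimal sketch of the delta-rule update from the companion paper "Linear Transformers Are Secretly Fast Weight Programmers" (listed below) is shown here; it is illustrative only and not code from this repository — the function name, shapes, and the per-step write strengths `betas` are assumptions, and the key feature map φ used in the paper is omitted:

```python
import numpy as np

def delta_rule_fwp(keys, values, queries, betas):
    """Illustrative delta-rule fast weight update (not the repo's code).

    Shapes (assumed): keys/queries (T, d_k), values (T, d_v), betas (T,).
    At each step the stored value under key k is partially overwritten
    by the new value, scaled by beta; the output is a read with the query.
    """
    T, d_k = keys.shape
    d_v = values.shape[1]
    W = np.zeros((d_v, d_k))                    # fast weight matrix
    outputs = np.empty((T, d_v))
    for t in range(T):
        k, v, q, beta = keys[t], values[t], queries[t], betas[t]
        v_bar = W @ k                           # value currently stored under k
        W = W + beta * np.outer(v - v_bar, k)   # delta-rule write
        outputs[t] = W @ q                      # associative read
    return outputs
```

With beta = 1 and orthonormal keys, querying with the same key at the same step retrieves the just-written value exactly, which is what distinguishes the delta rule from the purely additive update of a linear transformer.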
Alternatives and similar repositories for recurrent-fwp
Users interested in recurrent-fwp are comparing it to the repositories listed below.
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆110 · Updated 4 years ago
- [NeurIPS'20] Code for the paper "Compositional Visual Generation and Inference with Energy Based Models". ☆47 · Updated 2 years ago
- Implementation of Hierarchical Transformer Memory (HTM) for PyTorch. ☆76 · Updated 4 years ago
- Variational Reinforcement Learning. ☆16 · Updated last year
- ☆57 · Updated last year
- Generalised UDRL. ☆37 · Updated 3 years ago
- Usable implementation of the Emergent Symbol Binding Network (ESBN), in PyTorch. ☆25 · Updated 4 years ago
- [ICML'21] Improved Contrastive Divergence Training of Energy-Based Models. ☆66 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- JAX implementation of Graph Attention Networks. ☆13 · Updated 3 years ago
- Implementation of a Transformer that ponders, using the scheme from the PonderNet paper. ☆81 · Updated 4 years ago
- Experiments for Meta-Learning Symmetries by Reparameterization. ☆58 · Updated 4 years ago
- An adaptive training algorithm for residual networks. ☆17 · Updated 5 years ago
- Implementation of deep implicit attention in PyTorch. ☆65 · Updated 4 years ago
- Open-source code for the paper "On the Learning and Learnability of Quasimetrics". ☆32 · Updated 3 years ago
- Reparameterize your PyTorch modules. ☆71 · Updated 4 years ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆38 · Updated 3 years ago
- ☆23 · Updated 4 years ago
- ☆50 · Updated 5 years ago
- Code for "A General Recipe for Likelihood-free Bayesian Optimization" (ICML 2022). ☆45 · Updated 3 years ago
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems". ☆34 · Updated 2 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement. ☆40 · Updated 5 years ago
- Official code for "Can Wikipedia Help Offline Reinforcement Learning?" by Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. ☆106 · Updated 3 years ago
- Code associated with the paper "Learning Group Structure and Disentangled Representations of Dynamical Environments". ☆15 · Updated 2 years ago
- ☆80 · Updated 2 years ago
- Code for "Recurrent Independent Mechanisms". ☆120 · Updated 3 years ago
- The official repository for the paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- ☆17 · Updated last year
- Fast Discounted Cumulative Sums in PyTorch. ☆96 · Updated 4 years ago
- An implementation of the (Induced) Set Attention Block, from the Set Transformers paper. ☆65 · Updated 2 years ago