mcbal / deep-implicit-attention
Implementation of deep implicit attention in PyTorch
☆64 · Updated 3 years ago
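Deep implicit attention frames self-attention as the fixed point of an iterated update rather than a single feed-forward pass. The following is a minimal NumPy sketch of that fixed-point view, not the repository's actual code: the update rule `z ← attention(z) + x`, the weight shapes, and the function names are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def implicit_attention(x, Wq, Wk, Wv, n_iter=100, tol=1e-6):
    """Iterate z <- attention(z) + x to an approximate fixed point.

    x  : (n, d) input tokens, injected at every iteration.
    Wq, Wk, Wv : (d, d) projection matrices (assumed shapes).
    """
    z = x.copy()
    d = Wq.shape[1]
    for _ in range(n_iter):
        q, k, v = z @ Wq, z @ Wk, z @ Wv
        z_new = softmax(q @ k.T / np.sqrt(d)) @ v + x
        if np.linalg.norm(z_new - z) < tol:  # converged to a fixed point
            return z_new
        z = z_new
    return z
```

Convergence of the plain iteration is only guaranteed when the update is a contraction, e.g. for small enough weight scales; implicit-depth implementations typically use a proper root-finding solver and implicit differentiation instead of unrolling.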
Alternatives and similar repositories for deep-implicit-attention:
Users interested in deep-implicit-attention are comparing it to the repositories listed below.
- Usable implementation of the Emerging Symbol Binding Network (ESBN) in PyTorch ☆24 · Updated 4 years ago
- [NeurIPS 2020] Neural Manifold Ordinary Differential Equations (https://arxiv.org/abs/2006.10254) ☆115 · Updated last year
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Transformers with doubly stochastic attention ☆45 · Updated 2 years ago
- ☆49 · Updated 4 years ago
- ☆33 · Updated last year
- Experiments for Meta-Learning Symmetries by Reparameterization ☆56 · Updated 3 years ago
- ☆21 · Updated last year
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆72 · Updated 6 months ago
- Lie Algebra Convolutional Network implementation ☆42 · Updated 3 years ago
- Jupyter notebook accompanying "Going with the Flow: An Introduction to Normalizing Flows" ☆25 · Updated 3 years ago
- PyTorch implementation of the Power Spherical distribution ☆74 · Updated 7 months ago
- JAX exponential-map normalising flows on the sphere ☆17 · Updated 4 years ago
- Implementation of the Lie Transformer (equivariant self-attention) in PyTorch ☆88 · Updated 4 years ago
- Official repository for the ICLR 2021 paper "Evaluating the Disentanglement of Deep Generative Models with Manifold Topology" ☆35 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- TensorFlow implementation and notebooks for Implicit Maximum Likelihood Estimation ☆67 · Updated 2 years ago
- Riemannian Convex Potential Maps ☆67 · Updated last year
- ☆31 · Updated 4 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆102 · Updated 3 years ago
- VAEs with Lie-group latent space ☆97 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆48 · Updated last year
- Code for "'Hey, that's not an ODE:' Faster ODE Adjoints via Seminorms" (ICML 2021) ☆86 · Updated 2 years ago
- Implementation of approximate free-energy minimization in PyTorch ☆18 · Updated 3 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 4 years ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 3 years ago
- Normalizing Flows in JAX ☆107 · Updated 4 years ago
- ☆53 · Updated 6 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Structured matrices for compressing neural networks ☆66 · Updated last year