mcbal / deep-implicit-attention
Implementation of deep implicit attention in PyTorch
☆63 · Updated 3 years ago
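For orientation, below is a minimal sketch of what "implicit" attention means in this setting: the attention output is treated as the fixed point of an update equation and computed by iteration, in the spirit of deep equilibrium models, rather than by a single feed-forward pass. This is an illustrative toy, not the repository's actual code; the class name `ImplicitAttention` and the parameters `max_iter` and `tol` are invented for the example.

```python
# Illustrative sketch only -- not the mcbal/deep-implicit-attention implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitAttention(nn.Module):
    """Self-attention whose output is the (approximate) fixed point of an update z = f(z, x)."""

    def __init__(self, dim, max_iter=30, tol=1e-4):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.max_iter = max_iter
        self.tol = tol
        self.scale = dim ** -0.5

    def step(self, z, x):
        # One attention update on the current state, conditioned on the input tokens.
        h = z + x
        attn = F.softmax(self.q(h) @ self.k(h).transpose(-2, -1) * self.scale, dim=-1)
        return attn @ self.v(h)

    def forward(self, x):
        # Iterate the update until it (approximately) stops changing; return the fixed point.
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = self.step(z, x)
            if (z_next - z).norm() < self.tol * (z.norm() + 1e-8):
                z = z_next
                break
            z = z_next
        return z


x = torch.randn(2, 16, 64)        # (batch, tokens, dim)
out = ImplicitAttention(64)(x)    # same shape as the input
print(out.shape)                  # torch.Size([2, 16, 64])
```

A full equilibrium-style implementation would typically also differentiate through the fixed point implicitly rather than backpropagating through the unrolled loop, which is what keeps the memory cost independent of the number of solver iterations.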
Related projects
Alternatives and complementary repositories for deep-implicit-attention
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Transformers with doubly stochastic attention ☆40 · Updated 2 years ago
- Usable implementation of the Emergent Symbol Binding Network (ESBN), in PyTorch ☆23 · Updated 3 years ago
- Riemannian Convex Potential Maps ☆68 · Updated last year
- TensorFlow implementation and notebooks for Implicit Maximum Likelihood Estimation ☆68 · Updated 2 years ago
- JAX exponential map normalising flows on the sphere ☆17 · Updated 4 years ago
- Very deep VAEs in JAX/Flax ☆45 · Updated 3 years ago
- [NeurIPS 2020] Neural Manifold Ordinary Differential Equations (https://arxiv.org/abs/2006.10254) ☆115 · Updated last year
- PyTorch implementation of the Power Spherical distribution ☆73 · Updated 4 months ago
- General Invertible Transformations for Flow-based Generative Models ☆17 · Updated 3 years ago
- Code for the Thermodynamic Variational Objective ☆26 · Updated 2 years ago
- Monotone operator equilibrium networks ☆51 · Updated 4 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆69 · Updated 3 months ago
- Official implementation of the paper "Topographic VAEs learn Equivariant Capsules" ☆77 · Updated 2 years ago
- A minimal implementation of a VAE with BinConcrete (relaxed Bernoulli) latent distribution in TensorFlow. ☆21 · Updated 4 years ago
- [ICML'21 Oral] Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding ☆14 · Updated 3 years ago
- [NeurIPS'19] Deep Equilibrium Models JAX implementation ☆38 · Updated 4 years ago
- Supplementary code for the paper "Meta-Solver for Neural Ordinary Differential Equations" (https://arxiv.org/abs/2103.08561) ☆25 · Updated 3 years ago
- Experiments for Meta-Learning Symmetries by Reparameterization ☆56 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆47 · Updated last year
- ICML 2020 paper: Latent Variable Modelling with Hyperbolic Normalizing Flows ☆54 · Updated last year
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 4 years ago
- Code for "Neural Conservation Laws: A Divergence-Free Perspective" ☆35 · Updated last year
- Implementation of Lie Transformer, Equivariant Self-Attention, in PyTorch ☆87 · Updated 3 years ago
- Code for "'Hey, that's not an ODE:' Faster ODE Adjoints via Seminorms" (ICML 2021) ☆86 · Updated 2 years ago