sarthmit / Compositional-Attention
Code to reproduce the results for Compositional Attention
☆60 · Updated 2 years ago
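For context, the paper behind this repository replaces the rigid one-to-one pairing of attention heads and value projections with separate "search" and "retrieval" mechanisms. Below is a minimal, illustrative PyTorch sketch of that disentangled search/retrieval idea; it is not the repository's implementation, and the module name, default dimensions, and the simplified retrieval-selection step are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class CompositionalAttentionSketch(nn.Module):
    """Illustrative sketch: S independent searches (query/key attention maps)
    are combined with R independent retrievals (value projections), and each
    search softly selects which retrieval to use via a second softmax."""

    def __init__(self, dim, n_search=4, n_retrieve=4, head_dim=32):
        super().__init__()
        self.S, self.R, self.hd = n_search, n_retrieve, head_dim
        self.q = nn.Linear(dim, n_search * head_dim)       # search queries
        self.k = nn.Linear(dim, n_search * head_dim)       # search keys
        self.v = nn.Linear(dim, n_retrieve * head_dim)     # retrieval values
        self.sel_q = nn.Linear(dim, n_search * head_dim)   # retrieval-selection queries
        self.out = nn.Linear(n_search * head_dim, dim)

    def forward(self, x):
        B, T, _ = x.shape
        q = self.q(x).view(B, T, self.S, self.hd).transpose(1, 2)  # (B, S, T, hd)
        k = self.k(x).view(B, T, self.S, self.hd).transpose(1, 2)  # (B, S, T, hd)
        v = self.v(x).view(B, T, self.R, self.hd).transpose(1, 2)  # (B, R, T, hd)

        # 1) Search: one attention map per search head.
        attn = torch.softmax(q @ k.transpose(-1, -2) / self.hd ** 0.5, dim=-1)  # (B, S, T, T)

        # 2) Apply every search to every retrieval's values.
        retrieved = torch.einsum('bstu,brud->bsrtd', attn, v)  # (B, S, R, T, hd)

        # 3) Soft selection over retrievals, per search and position (simplified here:
        #    a selection query derived from x is compared against the retrieved outputs).
        sel_q = self.sel_q(x).view(B, T, self.S, self.hd).transpose(1, 2)      # (B, S, T, hd)
        sel_logits = torch.einsum('bstd,bsrtd->bsrt', sel_q, retrieved)        # (B, S, R, T)
        sel = torch.softmax(sel_logits / self.hd ** 0.5, dim=2).unsqueeze(-1)  # (B, S, R, T, 1)
        out = (sel * retrieved).sum(dim=2)                                     # (B, S, T, hd)

        return self.out(out.transpose(1, 2).reshape(B, T, self.S * self.hd))  # (B, T, dim)
```

With the default arguments, passing a tensor of shape (batch, seq_len, 64) returns a tensor of the same shape; standard multi-head attention is recovered as the special case where each search is hard-wired to exactly one retrieval.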
Alternatives and similar repositories for Compositional-Attention
Users interested in Compositional-Attention are comparing it to the libraries listed below.
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆28 · Updated 4 years ago
- ☆33 · Updated 4 years ago
- ☆36 · Updated 4 years ago
- ☆65 · Updated 10 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- ☆84 · Updated 10 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019. ☆56 · Updated last year
- Low-variance and unbiased gradient for backpropagation through categorical random variables, with application in variational auto-encoder… ☆17 · Updated 4 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆49 · Updated 2 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated last year
- Differentiable Top-k Classification Learning ☆82 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆41 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated last year
- How certain is your transformer? ☆25 · Updated 4 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization ☆16 · Updated 7 years ago
- Code for the NeurIPS 2018 paper "On Controllable Sparse Alternatives to Softmax" ☆24 · Updated 5 years ago
- Code used in the "Understanding Dimensional Collapse in Contrastive Self-supervised Learning" paper. ☆77 · Updated 2 years ago
- ☆81 · Updated 10 months ago
- [ICML'21] Improved Contrastive Divergence Training of Energy Based Models ☆63 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 3 years ago
- ☆51 · Updated 2 years ago
- Code to implement the AND-mask and geometric mean to do gradient based optimization, from the paper "Learning explanations that are hard … ☆39 · Updated 4 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆76 · Updated last year