sarthmit / Compositional-Attention
Code to reproduce the results for Compositional Attention
☆59 · Updated 3 years ago
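Compositional Attention (Mittal et al.) decouples the "search" step of attention (where to attend) from the "retrieval" step (what to read out), letting any of S search heads pair with any of R retrieval heads. The sketch below is a minimal, simplified illustration of that idea, not the repository's implementation: in particular, the learned static `gate` parameter is a stand-in for the paper's instance-dependent retrieval selection, and all module/parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalAttentionSketch(nn.Module):
    """Simplified sketch: S searches x R retrievals with soft retrieval selection.

    NOTE: the static `gate` below replaces the paper's input-dependent
    retrieval attention; it is a simplification for illustration only.
    """

    def __init__(self, dim, n_search=4, n_retrieve=4):
        super().__init__()
        assert dim % n_search == 0
        self.dh = dim // n_search
        self.S, self.R = n_search, n_retrieve
        self.q = nn.Linear(dim, n_search * self.dh)
        self.k = nn.Linear(dim, n_search * self.dh)
        self.v = nn.Linear(dim, n_retrieve * self.dh)
        # hypothetical gating: one mixing score per (search, retrieval) pair
        self.gate = nn.Parameter(torch.zeros(n_search, n_retrieve))
        self.out = nn.Linear(n_search * self.dh, dim)

    def forward(self, x):
        B, T, _ = x.shape
        # S independent "search" attention maps over the sequence
        q = self.q(x).view(B, T, self.S, self.dh).transpose(1, 2)  # B,S,T,dh
        k = self.k(x).view(B, T, self.S, self.dh).transpose(1, 2)  # B,S,T,dh
        # R independent "retrieval" value projections
        v = self.v(x).view(B, T, self.R, self.dh).transpose(1, 2)  # B,R,T,dh
        attn = F.softmax(q @ k.transpose(-1, -2) / self.dh ** 0.5, dim=-1)  # B,S,T,T
        # every search retrieves with every retrieval head: B,S,R,T,dh
        o = attn.unsqueeze(2) @ v.unsqueeze(1)
        # softly select one retrieval per search, then mix
        w = F.softmax(self.gate, dim=-1).view(1, self.S, self.R, 1, 1)
        o = (w * o).sum(dim=2)  # B,S,T,dh
        return self.out(o.transpose(1, 2).reshape(B, T, self.S * self.dh))
```

Unlike standard multi-head attention, where each head hard-wires one query/key pairing to one value projection, the S×R cross product lets the model recombine search and retrieval components, which is the compositional generalization the paper targets.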
Alternatives and similar repositories for Compositional-Attention
Users interested in Compositional-Attention are comparing it to the repositories listed below.
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- ☆22 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆111 · Updated 4 years ago
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆28 · Updated 4 years ago
- ☆36 · Updated 5 years ago
- ☆33 · Updated 4 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated 2 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 5 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆125 · Updated 5 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019. ☆57 · Updated 2 years ago
- ☆81 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- How certain is your transformer? ☆25 · Updated 4 years ago
- Code for the paper PermuteFormer ☆42 · Updated 4 years ago
- ☆85 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Updated 7 months ago
- ☆19 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆102 · Updated 2 years ago
- PyTorch implementation of Compressive Transformers, from DeepMind ☆163 · Updated 4 years ago
- Code for the paper "Can contrastive learning avoid shortcut solutions?" (NeurIPS 2021) ☆47 · Updated 3 years ago
- Code used in the "Understanding Dimensional Collapse in Contrastive Self-supervised Learning" paper ☆79 · Updated 3 years ago
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆41 · Updated 3 years ago
- [NeurIPS'20] Code for the paper Compositional Visual Generation and Inference with Energy Based Models ☆47 · Updated 2 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- TLDR is an unsupervised dimensionality reduction method that combines neighborhood embedding learning with the simplicity and effectivene… ☆126 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Updated 7 months ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 5 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆59 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆65 · Updated 3 years ago