sarthmit / Compositional-Attention
Code to reproduce the results for Compositional Attention
☆60 · Updated 2 years ago
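The repository accompanies "Compositional Attention: Disentangling Search and Retrieval" (Mittal et al.), which replaces multi-head attention's fixed query-key/value pairing with independent *search* and *retrieval* components combined by a soft selection step. Below is a rough, self-contained NumPy sketch of that idea only; the dimensions, projection names, and the simplified per-token selection mechanism here are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compositional_attention(x, n_search=2, n_retrieve=3, d_k=8, d_v=8):
    """Minimal sketch: S searches x R retrievals, with soft retrieval selection.

    Random projections stand in for learned weights (illustrative only).
    """
    n, d = x.shape
    # independent projections for each search (query/key) and each retrieval (value)
    Wq = rng.standard_normal((n_search, d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((n_search, d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((n_retrieve, d, d_v)) / np.sqrt(d)
    # projection used to score which retrieval each search should use (simplified)
    Wq_sel = rng.standard_normal((d, d_v)) / np.sqrt(d)

    outputs = []
    for s in range(n_search):
        # search s produces one attention pattern over tokens
        A = softmax((x @ Wq[s]) @ (x @ Wk[s]).T / np.sqrt(d_k))      # (n, n)
        # candidate outputs: this search paired with every retrieval
        cand = np.stack([A @ (x @ Wv[r]) for r in range(n_retrieve)])  # (R, n, d_v)
        # soft selection over retrievals, per token
        q_sel = x @ Wq_sel                                           # (n, d_v)
        scores = np.einsum('nd,rnd->nr', q_sel, cand) / np.sqrt(d_v)  # (n, R)
        w = softmax(scores, axis=-1)
        outputs.append(np.einsum('nr,rnd->nd', w, cand))             # (n, d_v)
    return np.concatenate(outputs, axis=-1)                          # (n, S*d_v)

x = rng.standard_normal((5, 16))
out = compositional_attention(x)
print(out.shape)  # (5, 16)
```

Because searches and retrievals are selected independently rather than hard-wired into heads, S searches and R retrievals yield S×R possible pairings from S+R parameter sets — the compositionality the paper targets.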
Alternatives and similar repositories for Compositional-Attention:
Users interested in Compositional-Attention are comparing it to the repositories listed below.
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Code for Reparameterizable Subset Sampling via Continuous Relaxations, IJCAI 2019. ☆54 · Updated last year
- ☆22 · Updated 3 years ago
- ☆80 · Updated 6 months ago
- ☆36 · Updated 4 years ago
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization ☆16 · Updated 6 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆97 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆26 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- An adaptive training algorithm for residual network ☆15 · Updated 4 years ago
- ☆33 · Updated 3 years ago
- Reparameterize your PyTorch modules ☆70 · Updated 4 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆32 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆102 · Updated 3 years ago
- ☆65 · Updated 6 months ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆111 · Updated 4 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆74 · Updated last year
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆48 · Updated last year
- ☆29 · Updated 3 years ago
- [NeurIPS'20] Code for the Paper Compositional Visual Generation and Inference with Energy Based Models ☆44 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆58 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆37 · Updated 3 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated last year
- Code used in "Understanding Dimensional Collapse in Contrastive Self-supervised Learning" paper. ☆76 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆27 · Updated 3 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆38 · Updated 3 years ago