RobertCsordas / transformer_generalization
The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We significantly improve the systematic generalization of transformer models on a variety of datasets using simple tricks and careful considerations.
☆ 66 · Updated last year
Related projects
Alternatives and complementary repositories for transformer_generalization
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" (☆ 32, updated 2 years ago)
- GPT, but made only out of MLPs (☆ 86, updated 3 years ago)
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper (☆ 78, updated 3 years ago)
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks (☆ 59, updated last year)
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch (☆ 94, updated last year)
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain (☆ 33, updated 4 years ago)
- Code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" (☆ 69, updated 2 months ago)
- Code for "Discovering Non-monotonic Autoregressive Orderings with Variational Inference" (paper and code updated from ICLR 2021)☆11Updated 8 months ago
- Official code repository of the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al. (☆ 26, updated 3 years ago)
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper (☆ 57, updated last year)
- Code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … (☆ 55, updated 3 years ago)
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) (☆ 59, updated 2 years ago)
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) (☆ 47, updated last year)
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" (☆ 99, updated 3 years ago)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (☆ 47, updated 2 years ago)
- Code for the paper "Implicit Representations of Meaning in Neural Language Models" (☆ 49, updated last year)
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… (☆ 46, updated last year)
- Code for the paper "PermuteFormer" (☆ 42, updated 3 years ago)
- A library to create and manage configuration files, especially for machine learning projects (☆ 77, updated 2 years ago)
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" (☆ 28, updated 2 years ago)