ischlag / Fast-Weight-Memory-public
Official code repository of the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al.
☆28 · Updated 4 years ago
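For orientation, here is a minimal, illustrative sketch of the general fast-weight idea the paper builds on: key–value associations are written into a weight matrix as outer products and read back with a query. The dimensions, helper names, and the write strength `beta` are assumptions for illustration, not the paper's exact FWM update rule.

```python
import torch

# Illustrative sketch only (not the exact FWM of Schlag et al.):
# a fast-weight matrix stores key -> value associations as outer products
# and answers queries with a matrix-vector product.
d_key, d_value = 32, 32
F = torch.zeros(d_key, d_value)  # fast weight memory, starts empty


def write(F, key, value, beta=0.5):
    """Add the association key -> value, scaled by a write strength beta."""
    return F + beta * torch.outer(key, value)


def read(F, query):
    """Retrieve the value associated with `query` (approximate for similar keys)."""
    return query @ F


key, value = torch.randn(d_key), torch.randn(d_value)
F = write(F, key, value)
retrieved = read(F, key)  # roughly proportional to `value`, plus interference
```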
Alternatives and similar repositories for Fast-Weight-Memory-public
Users interested in Fast-Weight-Memory-public are comparing it to the repositories listed below
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Updated 6 months ago
- lanmt ebm ☆12 · Updated 5 years ago
- ☆45 · Updated 4 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… ☆27 · Updated 5 years ago
- ☆14 · Updated 3 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- Code to reproduce the results for Compositional Attention ☆59 · Updated 3 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- Suite of 500 procedurally-generated NLP tasks to study language model adaptability ☆21 · Updated 3 years ago
- Source code for the paper "Learning from Explanations with Neural Execution Tree", ICLR 2020 ☆18 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 4 years ago
- ☆13 · Updated 3 years ago
- Code accompanying the ICML 2021 paper "Few-shot Language Coordination by Modeling Theory of Mind" ☆19 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆50 · Updated 4 years ago
- Code release for "On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies" ☆16 · Updated 4 years ago
- ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation ☆25 · Updated 5 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 3 years ago
- ☆52 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- ☆50 · Updated 4 years ago
- ☆44 · Updated 5 years ago
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks", ACL 2023 Findings ☆16 · Updated 2 years ago
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [ACL 2021] ☆57 · Updated 4 years ago
- ☆30 · Updated 3 years ago
- Exploring Few-Shot Adaptation of Language Models with Tables ☆24 · Updated 3 years ago
- ReaSCAN is a synthetic navigation task that requires models to reason about surroundings over syntactically difficult languages. (NeurIPS… ☆19 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 6 years ago