princeton-nlp / DataMUX
[NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks
☆60 · Updated 2 years ago
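The core idea from the paper: N input instances are multiplexed into a single sequence of vectors, processed in one pass by a shared backbone, and demultiplexed back into N separate predictions. Below is a minimal PyTorch sketch of that scheme; the module names, shapes, and projection choices are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class Multiplexer(nn.Module):
    """Mix N instances into one sequence using fixed random projections."""
    def __init__(self, num_instances: int, dim: int):
        super().__init__()
        # Frozen random projections (an assumption standing in for the
        # paper's multiplexing transforms) give each instance a distinct
        # signature so the shared backbone can keep them separable.
        self.register_buffer("proj", torch.randn(num_instances, dim, dim) / dim ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_instances, batch, seq, dim) -> (batch, seq, dim)
        return torch.einsum("nbsd,nde->nbse", x, self.proj).mean(dim=0)

class Demultiplexer(nn.Module):
    """Recover one output stream per instance with per-instance heads."""
    def __init__(self, num_instances: int, dim: int):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_instances)])

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, dim) -> (num_instances, batch, seq, dim)
        return torch.stack([head(h) for head in self.heads])

# Usage: multiplex 4 instances, run one shared encoder pass, demultiplex 4 outputs.
N, B, S, D = 4, 2, 16, 64
mux, demux = Multiplexer(N, D), Demultiplexer(N, D)
layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
x = torch.randn(N, B, S, D)
y = demux(backbone(mux(x)))  # (N, B, S, D): one prediction stream per instance
```

The payoff the paper targets is throughput: a single forward pass serves N instances at once.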
Alternatives and similar repositories for DataMUX
Users interested in DataMUX are comparing it to the repositories listed below.
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- GPT, but made only out of MLPs ☆88 · Updated 3 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆80 · Updated 3 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆58 · Updated 3 years ago
- ReaSCAN is a synthetic navigation task that requires models to reason about surroundings over syntactically difficult languages. (NeurIPS… ☆20 · Updated 3 years ago
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [ACL 2021] ☆55 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 years ago
- Pytorch library for factorized L0-based pruning. ☆45 · Updated last year
- Usable implementation of Emerging Symbol Binding Network (ESBN), in Pytorch ☆24 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (see the sketch after this list) ☆49 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆49 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆48 · Updated 4 years ago
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆28 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Code for "Rissanen Data Analysis: Examining Dataset Characteristics via Description Length" by Ethan Perez, Douwe Kiela, and Kyungyhun Ch…☆36Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Exploring Few-Shot Adaptation of Language Models with Tables ☆23 · Updated 2 years ago
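As flagged in the Token Shift GPT entry above, the token-shift trick is small enough to sketch in a few lines: replace half of each position's feature channels with the previous position's channels, so plain feed-forward layers can mix information along the sequence without attention. This is an illustrative sketch of the general technique, not that repository's code.

```python
import torch
import torch.nn.functional as F

def token_shift(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, seq, dim). Keep half the channels in place and shift the
    # other half one step forward in time (causal: position t sees t-1).
    keep, shift = x.chunk(2, dim=-1)
    # Pad one step at the front of the sequence axis and crop the last
    # step, which shifts features right while preserving the length.
    shifted = F.pad(shift, (0, 0, 1, -1))
    return torch.cat((keep, shifted), dim=-1)

x = torch.randn(2, 8, 64)
y = token_shift(x)  # (2, 8, 64): same shape, channels mixed across time
```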