princeton-nlp / DataMUX
[NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks
☆60 · Updated 3 years ago
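For context, below is a minimal sketch of the data-multiplexing idea the paper's title refers to: several inputs are mixed into a single representation, processed by one shared encoder pass, and then demultiplexed into per-instance outputs. All class, method, and parameter names here are illustrative assumptions, not the repository's actual API.

```python
# Hedged sketch of data multiplexing (DataMUX-style), NOT the repo's real code.
# N inputs share one encoder forward pass; index embeddings recover each output.
import torch
import torch.nn as nn

class MultiplexedEncoder(nn.Module):
    def __init__(self, num_instances: int, dim: int):
        super().__init__()
        self.num_instances = num_instances
        # One projection per instance, so the inputs stay separable after averaging.
        self.mux_proj = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_instances))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Index embeddings tell the demultiplexer which instance to reconstruct.
        self.index_emb = nn.Embedding(num_instances, dim)
        self.demux = nn.Linear(2 * dim, dim)

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        # xs: (batch, num_instances, seq_len, dim)
        mixed = torch.stack(
            [proj(xs[:, i]) for i, proj in enumerate(self.mux_proj)], dim=1
        ).mean(dim=1)                 # (batch, seq_len, dim): one multiplexed sequence
        hidden = self.encoder(mixed)  # a single forward pass serves all N instances
        outs = []
        for i in range(self.num_instances):
            idx = self.index_emb.weight[i].expand_as(hidden)
            outs.append(self.demux(torch.cat([hidden, idx], dim=-1)))
        return torch.stack(outs, dim=1)  # (batch, num_instances, seq_len, dim)

batch, n, seq, dim = 2, 4, 8, 64
model = MultiplexedEncoder(num_instances=n, dim=dim)
print(model(torch.randn(batch, n, seq, dim)).shape)  # torch.Size([2, 4, 8, 64])
```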
Alternatives and similar repositories for DataMUX
Users interested in DataMUX are comparing it to the repositories listed below
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- ☆96 · Updated 3 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated 7 months ago
- ☆19 · Updated 3 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆111 · Updated 4 years ago
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering". ☆50 · Updated 2 years ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆41 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [ACL 2021] ☆56 · Updated 4 years ago
- ☆13 · Updated 4 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 4 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Updated 3 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆16 · Updated 4 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 · Updated 4 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆190 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Updated 7 months ago
- codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Usable implementation of Emerging Symbol Binding Network (ESBN), in Pytorch ☆25 · Updated 5 years ago
- ☆56 · Updated 2 years ago
- Block Sparse movement pruning ☆81 · Updated 5 years ago
- Blog post ☆17 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago