mlpen / YOSO
☆18 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for YOSO
- Parameter Efficient Transfer Learning with Diff Pruning ☆72 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆106 · Updated 4 years ago
- ☆81 · Updated 3 months ago
- ☆32 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆95 · Updated last year
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆56 · Updated 2 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆32 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆59 · Updated 2 years ago
- ☆21 · Updated last year
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆66 · Updated last year
- ☆65 · Updated 2 months ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers ☆100 · Updated 3 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated 2 years ago
- ☆93 · Updated last year
- Pytorch library for factorized L0-based pruning ☆43 · Updated last year
- ☆36 · Updated 4 years ago
- ☆15 · Updated 3 years ago
- ☆77 · Updated 3 months ago
- Efficient Transformers with Dynamic Token Pooling ☆54 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆116 · Updated 3 years ago
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆26 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- ☆126 · Updated 2 years ago
- Implementation of QKVAE ☆11 · Updated last year
- ☆69 · Updated 8 months ago
- N/A ☆18 · Updated 2 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆72 · Updated 3 months ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆138 · Updated 2 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆87 · Updated 3 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆110 · Updated 8 months ago