mlpen / YOSO
☆18 · Updated 3 years ago
Related projects:
- Parameter-Efficient Transfer Learning with Diff Pruning ☆70 · Updated 3 years ago
- ☆80 · Updated last month
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆42 · Updated last year
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆106 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆58 · Updated 2 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated last year
- ☆32 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆60 · Updated 2 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆83 · Updated 2 years ago
- ☆75 · Updated last month
- Efficient Transformers with Dynamic Token Pooling ☆51 · Updated last year
- ☆127 · Updated 2 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning (ICLR 2021) ☆36 · Updated 3 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆38 · Updated last year
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆54 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆116 · Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆68 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆78 · Updated last year
- FairSeq repo with Apollo optimizer ☆108 · Updated 8 months ago
- ☆65 · Updated 3 weeks ago
- ☆20 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆32 · Updated 2 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022) ☆39 · Updated last year
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission ☆46 · Updated 4 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆77 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆97 · Updated 3 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆28 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆94 · Updated last year
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆30 · Updated last year
- ☆41 · Updated 2 months ago