dguo98 / DiffPruning
Parameter Efficient Transfer Learning with Diff Pruning
☆73 · Updated 4 years ago
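Diff pruning, the technique this repo implements, fine-tunes a frozen pretrained model by learning a task-specific difference vector δ that is added to the base parameters and driven toward sparsity with a relaxed L0 (hard-concrete) penalty, so each new task only costs the nonzero entries of δ. Below is a minimal sketch of the idea for a single linear layer; the class name, default hyperparameters, and the `expected_l0` helper are illustrative assumptions, not the repo's actual API.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffPrunedLinear(nn.Module):
    """Diff pruning on one linear layer (after Guo et al., ACL 2021).

    The pretrained weight is frozen; a task-specific diff vector (delta)
    is learned on top and pushed toward sparsity by a relaxed L0 penalty
    using hard-concrete gates (Louizos et al., 2018). Names here are
    illustrative, not the repo's actual API.
    """

    def __init__(self, pretrained: nn.Linear, gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        self.register_buffer("base_weight", pretrained.weight.detach().clone())
        self.register_buffer(
            "base_bias",
            pretrained.bias.detach().clone() if pretrained.bias is not None else None,
        )
        self.delta = nn.Parameter(torch.zeros_like(self.base_weight))      # learned diff
        self.log_alpha = nn.Parameter(torch.zeros_like(self.base_weight))  # gate logits
        self.gamma, self.zeta, self.beta = gamma, zeta, 2.0 / 3.0

    def _gate(self) -> torch.Tensor:
        if self.training:  # stochastic hard-concrete sample
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (-u).log1p() + self.log_alpha) / self.beta)
        else:              # deterministic gate at eval time
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (gamma, zeta), then clip into [0, 1] to get exact zeros.
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.base_weight + self._gate() * self.delta  # base + sparse diff
        return F.linear(x, weight, self.base_bias)

    def expected_l0(self) -> torch.Tensor:
        # Expected number of nonzero gates; added to the task loss as a penalty.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()
```

Training would minimize `task_loss + lambda * expected_l0()`; after training, gates below a threshold can be zeroed so only the surviving diff entries need to be stored per task.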
Alternatives and similar repositories for DiffPruning:
Users interested in DiffPruning are comparing it to the libraries listed below.
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆140 · Updated 3 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆58 · Updated 3 years ago
- Block-sparse movement pruning (a sketch of the underlying movement-pruning idea appears after this list) ☆79 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆62 · Updated 3 years ago
- [ACL-IJCNLP 2021] "EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets" by Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, … ☆18 · Updated 3 years ago
- PyTorch library for factorized L0-based pruning. ☆45 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆44 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆75 · Updated last year
- ☆19 · Updated 3 years ago
- Code for the ACL-2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- A method to improve inference time for BERT; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro… ☆61 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆153 · Updated 3 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission. ☆46 · Updated 5 years ago
- ☆22 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Implementation of Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆39 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆49 · Updated 2 years ago
- ☆52 · Updated last year
- ☆62 · Updated 3 years ago
- Simple parameter-efficient fine-tuning for Transformer-based masked language models ☆140 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆95 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- ☆29 · Updated 9 months ago
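For the block-sparse movement-pruning entry above, here is a minimal sketch of the underlying unstructured movement-pruning idea (Sanh et al., 2020): importance scores are learned jointly with fine-tuning, the forward pass keeps only the top-scoring weights, and a straight-through estimator routes gradients to the scores. Class and parameter names are illustrative assumptions, and the actual repo prunes whole blocks rather than individual weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MovementPrunedLinear(nn.Module):
    """Unstructured movement pruning for one linear layer.

    Scores accumulate "movement": weights whose removal would hurt the
    loss see their scores rise during fine-tuning and survive the top-k
    mask. A simplified sketch, not the repo's actual API.
    """

    def __init__(self, in_features: int, out_features: int, keep_ratio: float = 0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.scores = nn.Parameter(torch.zeros(out_features, in_features))
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.scores.numel() * self.keep_ratio))
        # Threshold at the k-th largest score; only those weights survive.
        threshold = torch.topk(self.scores.detach().flatten(), k).values[-1]
        hard = (self.scores >= threshold).float()
        # Straight-through estimator: binary mask in the forward pass,
        # identity gradient to the scores in the backward pass.
        mask = hard.detach() + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```

With this formulation the gradient reaching `scores` is proportional to the weight times its gradient, so scores grow for weights that move away from zero during fine-tuning, which is the selection criterion that distinguishes movement pruning from magnitude pruning.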