yaozhewei / MLPruning
MLPruning, PyTorch, NLP, BERT, Structured Pruning
☆20 · Updated 4 years ago
Alternatives and similar repositories for MLPruning
Users interested in MLPruning are comparing it to the libraries listed below.
- ☆221 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" by Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆101 · Updated 2 years ago
- Block-sparse movement pruning ☆81 · Updated 5 years ago
- PyTorch library for factorized L0-based pruning ☆45 · Updated 2 years ago
- ☆43 · Updated last year
- [ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wa… ☆33 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts ☆65 · Updated 4 years ago
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆54 · Updated last year
- ☆16 · Updated 4 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆59 · Updated 3 years ago
- Large-batch deep learning optimizer LARS for ImageNet (77% accuracy) with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 4 years ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆95 · Updated last year
- ☆62 · Updated 2 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆90 · Updated 2 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022) ☆46 · Updated 3 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Official implementation of the ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT" ☆89 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- ☆41 · Updated 4 years ago
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆113 · Updated 2 years ago
- Code for an ICML 2021 submission ☆35 · Updated 4 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 4 years ago
- Block-sparse primitives for PyTorch ☆160 · Updated 4 years ago
- [ICML 2021 Oral] "CATE: Computation-aware Neural Architecture Encoding with Transformers" by Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang ☆19 · Updated 4 years ago