huggingface / nn_pruning
Prune a model while fine-tuning or training.
☆404Updated 3 years ago
Alternatives and similar repositories for nn_pruning
Users interested in nn_pruning are comparing it to the libraries listed below.
- Library for 8-bit optimizers and quantization routines.☆779Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU☆707Updated 5 months ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment☆790Updated 2 years ago
- Implementation of a Transformer, but completely in Triton☆274Updated 3 years ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper☆314Updated 2 years ago
- A library for researching neural networks compression and acceleration methods.☆139Updated 2 weeks ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing☆335Updated last year
- Fast Block Sparse Matrices for Pytorch☆549Updated 4 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p…☆432Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408☆197Updated 2 years ago
- Root Mean Square Layer Normalization☆254Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers☆191Updated 2 years ago
- Efficient, check-pointed data loading for deep learning with massive data sets.☆209Updated 2 years ago
- Block Sparse movement pruning☆81Updated 4 years ago
- Understanding the Difficulty of Training Transformers☆330Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards☆300Updated 3 months ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight …☆236Updated 2 years ago
- Sequence modeling with Mega.☆300Updated 2 years ago
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.☆587Updated 2 years ago
- ☆252Updated last year
- Scalable PaLM implementation in PyTorch☆190Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022)☆543Updated last year
- Flexible components pairing 🤗 Transformers with Pytorch Lightning☆612Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale☆156Updated last year
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint☆408Updated last year
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization☆257Updated 2 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).☆226Updated 3 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference☆159Updated 3 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch☆184Updated 2 years ago
- Accelerate PyTorch models with ONNX Runtime☆364Updated 6 months ago
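The 8-bit optimizers/quantization entry above rests on symmetric absmax quantization: scale a tensor so its largest magnitude maps to the int8 extreme, then round. This is a minimal pure-Python sketch of that idea, not the bitsandbytes API; the function names are illustrative.

```python
def quantize_int8(xs):
    # Symmetric absmax int8 quantization: the largest magnitude maps to 127,
    # everything else is scaled proportionally and rounded to an integer.
    scale = max(abs(v) for v in xs) / 127.0
    return [round(v / scale) for v in xs], scale

def dequantize_int8(qs, scale):
    # Recover approximate float values by multiplying back by the scale.
    return [q * scale for q in qs]
```

The round-trip is lossy: values are recovered only to within half a quantization step, which is why such schemes are paired with per-block scales in practice.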
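The Root Mean Square Layer Normalization repo above implements RMSNorm, which drops LayerNorm's mean-centering and divides by the root mean square of the activations instead. A minimal sketch of the formula in plain Python (the library itself is a PyTorch module):

```python
import math

def rms_norm(x, weight, eps=1e-8):
    # RMSNorm: divide each activation by the root mean square of the vector
    # (no mean subtraction, unlike LayerNorm), then apply a learned gain.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]
```

Skipping the mean-centering step makes RMSNorm cheaper than LayerNorm while often matching its quality, which is why it appears in several of the efficient-transformer projects listed here.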
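The ALiBi entry above (ICLR 2022) replaces positional embeddings with a linear bias on attention logits: each head penalizes a key by a head-specific slope times its distance from the query. A small sketch of the two ingredients, using the geometric slope schedule from the paper; function names here are illustrative, not the repo's API.

```python
def alibi_slopes(n_heads):
    # Head-specific slopes form a geometric sequence; for n heads (a power
    # of two in the paper), slope_i = 2 ** (-8 * (i + 1) / n_heads).
    return [2 ** (-8 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(slope, seq_len):
    # Bias added to the attention logits: query position q penalizes key
    # position k by slope * (q - k); future positions are handled by the
    # usual causal mask, so only k <= q matters.
    return [[-slope * (q - k) for k in range(seq_len)]
            for q in range(seq_len)]
```

Because the bias depends only on relative distance, models trained this way extrapolate to sequences longer than those seen during training.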