huggingface / nn_pruning
Prune a model while finetuning or training.
☆400 · Updated 2 years ago
Alternatives and similar repositories for nn_pruning:
Users interested in nn_pruning are comparing it to the libraries listed below.
- A library for researching neural network compression and acceleration methods. ☆141 · Updated 6 months ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆195 · Updated last year
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆331 · Updated 8 months ago
- Library for 8-bit optimizers and quantization routines. ☆717 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆181 · Updated 2 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆312 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆260 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆785 · Updated last year
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆241 · Updated 2 years ago
- ☆202 · Updated 3 years ago
- Root Mean Square Layer Normalization ☆233 · Updated last year
- Efficient, check-pointed data loading for deep learning with massive data sets. ☆205 · Updated last year
- Running BERT without Padding ☆472 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆704 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆278 · Updated 3 weeks ago
- Fast Block Sparse Matrices for PyTorch ☆546 · Updated 4 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆606 · Updated 8 months ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆748 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆518 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated last year
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆432 · Updated 2 years ago
- Accelerate PyTorch models with ONNX Runtime ☆358 · Updated last month
- Sequence modeling with Mega. ☆295 · Updated 2 years ago
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. ☆578 · Updated last year
- Block Sparse movement pruning ☆79 · Updated 4 years ago
- GPTQ inference Triton kernel ☆298 · Updated last year
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 2 years ago
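Several of the entries above are small, self-contained techniques rather than full frameworks. As one concrete illustration, Root Mean Square Layer Normalization (listed above) scales activations by their root-mean-square instead of mean-centering and dividing by the standard deviation, as standard LayerNorm does. Below is a minimal pure-Python sketch of the idea; the function name `rms_norm` and the `gain`/`eps` parameters are illustrative and not taken from any listed repository:

```python
import math

def rms_norm(x, gain=None, eps=1e-8):
    """Sketch of RMS layer normalization over a 1-D list of floats.

    Divides each element by the root-mean-square of the vector
    (plus a small eps for numerical stability), then applies an
    optional learned per-element gain. Unlike LayerNorm, the mean
    is never subtracted.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    if gain is None:
        gain = [1.0] * len(x)
    return [g * v / rms for g, v in zip(gain, x)]

# Example: the normalized vector has an RMS of (approximately) 1.
out = rms_norm([3.0, 4.0])  # ≈ [0.8485, 1.1314]
```

A real implementation (as in the listed repository) would operate on tensors along the last dimension and learn `gain` during training; the sketch only shows the normalization arithmetic.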