huggingface / nn_pruning
Prune a model while finetuning or training.
☆394 · Updated 2 years ago
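nn_pruning's own API is not shown on this page, so as a generic illustration of the repository's stated purpose (pruning a model while training continues), here is a minimal magnitude-pruning sketch in NumPy. The layer shapes, 50% sparsity target, and MSE loss are all invented for the example; masked weights are re-zeroed on every step so training proceeds on the surviving weights only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer standing in for one transformer weight matrix.
W = rng.normal(size=(32, 16))
X = rng.normal(size=(64, 16))
Y = X @ rng.normal(size=(16, 32))  # targets from a hidden "teacher" matrix

# Magnitude mask: zero out the smallest 50% of weights by absolute value.
threshold = np.quantile(np.abs(W), 0.5)
mask = (np.abs(W) >= threshold).astype(W.dtype)

lr = 0.01
for _ in range(100):
    W *= mask                             # keep pruned weights at zero
    pred = X @ W.T                        # forward pass
    grad = 2 * (pred - Y).T @ X / len(X)  # dLoss/dW for the MSE loss
    W -= lr * grad                        # gradient step on surviving weights
W *= mask                                 # final mask reapplication

sparsity = float((W == 0).mean())
print(f"sparsity ~ {sparsity:.2f}")
```

Real libraries (including nn_pruning) typically learn or anneal the masks rather than fixing them up front; this sketch only shows the mask-then-train loop in its simplest form.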
Related projects
Alternatives and complementary repositories for nn_pruning
- Library for 8-bit optimizers and quantization routines. ☆714 · Updated 2 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing. ☆330 · Updated 4 months ago
- FastFormers: highly efficient transformer models for NLU. ☆701 · Updated 10 months ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization. ☆229 · Updated last year
- A library for researching neural network compression and acceleration methods. ☆136 · Updated 2 months ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper. ☆309 · Updated last year
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. ☆778 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408). ☆192 · Updated last year
- Fast Block Sparse Matrices for PyTorch. ☆545 · Updated 3 years ago
- Running BERT without Padding. ☆460 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers. ☆171 · Updated last year
- ⚡ Boost inference speed of T5 models by 5x and reduce model size by 3x. ☆566 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆980 · Updated 3 months ago
- Sequence modeling with Mega. ☆298 · Updated last year
- An efficient implementation of popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning. ☆611 · Updated last year
- Understanding the Difficulty of Training Transformers. ☆328 · Updated 2 years ago
- Repository with the code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, t… ☆302 · Updated 3 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022). ☆507 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models". ☆254 · Updated 2 months ago
- Accelerate PyTorch models with ONNX Runtime. ☆356 · Updated 2 months ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint. ☆361 · Updated 7 months ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation. ☆735 · Updated this week
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆234 · Updated last year
- Efficient, check-pointed data loading for deep learning with massive data sets. ☆205 · Updated last year
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆169 · Updated 4 years ago
- Pipeline Parallelism for PyTorch. ☆726 · Updated 2 months ago
- A GPU performance profiling tool for PyTorch models. ☆495 · Updated 3 years ago
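Several of the repositories above (8-bit optimizers, I-BERT, LLM-QAT, DiffQ) center on low-bit quantization. As background for that theme, and not as the API of any listed project, here is a minimal sketch of symmetric per-tensor int8 weight quantization; the tensor shape and scale choice are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 8)).astype(np.float32)

# Symmetric per-tensor int8: map [-max|w|, +max|w|] onto [-127, 127].
scale = float(np.abs(w).max()) / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize and measure the round-trip error; rounding bounds it by scale/2.
w_hat = q.astype(np.float32) * scale
max_err = float(np.abs(w - w_hat).max())
print(f"max round-trip error: {max_err:.6f} (bound: {scale / 2:.6f})")
```

Production schemes add refinements this sketch omits (per-channel scales, zero points for asymmetric ranges, quantization-aware training), but the quantize/dequantize round trip above is the core operation they all build on.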