Prune a model while finetuning or training.
☆406 · Jun 21, 2022 · Updated 3 years ago
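The headline feature above (pruning a model while finetuning or training) can be illustrated with a minimal magnitude-pruning sketch. This is a generic illustration in plain numpy, not nn_pruning's actual method (which builds on movement pruning); the function name and threshold logic are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Illustrative unstructured pruning: zero out the smallest-magnitude
    fraction `sparsity` of entries and return the pruned weights plus the
    boolean keep-mask (the mask would be re-applied after each training step)."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

During finetuning, such a mask is typically recomputed or kept fixed and multiplied into the weights after every optimizer step, so the surviving weights keep adapting to the task.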
Alternatives and similar repositories for nn_pruning
Users interested in nn_pruning are comparing it to the libraries listed below.
- Block Sparse movement pruning ☆83 · Nov 26, 2020 · Updated 5 years ago
- FastFormers: highly efficient transformer models for NLU ☆709 · Mar 21, 2025 · Updated last year
- MLPruning: structured pruning for BERT (PyTorch, NLP) ☆20 · Jun 29, 2021 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Feb 22, 2022 · Updated 4 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆198 · May 9, 2023 · Updated 2 years ago
- ☆13 · Mar 27, 2020 · Updated 5 years ago
- ☆87 · Jun 2, 2022 · Updated 3 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆85 · Feb 1, 2026 · Updated last month
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM, and Sentence Transformers with easy-to-use hardware optimization… ☆3,325 · Mar 13, 2026 · Updated last week
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆315 · Jun 12, 2023 · Updated 2 years ago
- Efficient, scalable, and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Oct 23, 2024 · Updated last year
- New dataset ☆311 · Aug 31, 2021 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆168 · Oct 7, 2022 · Updated 3 years ago
- An efficient implementation of popular sequence models for text generation, summarization, and translation tasks (https://arxiv.org/p…) ☆433 · Aug 17, 2022 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Jul 11, 2024 · Updated last year
- Fast Block Sparse Matrices for PyTorch ☆549 · Jan 21, 2021 · Updated 5 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Feb 28, 2023 · Updated 3 years ago
- XtremeDistil: a framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ☆157 · Dec 20, 2023 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,563 · Updated this week
- A Python project template for personal projects! 🙋♀️ ☆11 · Nov 28, 2020 · Updated 5 years ago
- The implementation of "Neural Machine Translation without Embeddings" (NAACL 2021) ☆33 · Jun 9, 2021 · Updated 4 years ago
- A visual analysis tool to explore learned representations in Transformer models ☆604 · Feb 7, 2024 · Updated 2 years ago
- ⚡ Boost the inference speed of T5 models by 5x and reduce their size by 3x ☆589 · Apr 24, 2023 · Updated 2 years ago
- Fast & easy transfer learning for NLP. Harvesting language models for industry, with a focus on question answering ☆1,750 · Dec 20, 2023 · Updated 2 years ago
- A highly specialized crate to parse and use `google/sentencepiece`'s precompiled_charsmap in `tokenizers` ☆21 · Jan 8, 2026 · Updated 2 months ago
- Shared repository for open-sourced projects from the Google AI Language team ☆1,760 · Updated this week
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Jul 14, 2024 · Updated last year
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,804 · Mar 1, 2026 · Updated 2 weeks ago
- Longformer: The Long-Document Transformer ☆2,189 · Feb 8, 2023 · Updated 3 years ago
- Distillation of a BERT model with the Catalyst framework ☆78 · Jun 12, 2023 · Updated 2 years ago
- ☆30 · Sep 27, 2021 · Updated 4 years ago
- [CVPR 2023] DepGraph: Towards Any Structural Pruning (LLMs, vision foundation models, etc.) ☆3,267 · Sep 7, 2025 · Updated 6 months ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · May 10, 2023 · Updated 2 years ago
- Autoregressive Entity Retrieval ☆796 · Jul 6, 2023 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆96 · Feb 9, 2023 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis ☆147 · Jul 26, 2021 · Updated 4 years ago
- ☆221 · Jun 8, 2020 · Updated 5 years ago
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" ☆1,626 · Jun 12, 2023 · Updated 2 years ago
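Several entries above (Block Sparse movement pruning, Fast Block Sparse Matrices, MLPruning, DepGraph) concern *structured* or *block* pruning, where whole blocks of a weight matrix are removed so that hardware can actually skip them. A minimal numpy sketch of the idea, assuming a simple L2-norm block-scoring rule (the function and block-scoring choice are illustrative, not any particular library's algorithm):

```python
import numpy as np

def block_prune(weights, block=(2, 2), sparsity=0.5):
    """Illustrative block pruning: score each (br x bc) block by its L2 norm
    and zero the lowest-scoring fraction `sparsity` of blocks entirely."""
    rows, cols = weights.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0, "shape must tile evenly into blocks"
    # Reshape so axes 1 and 3 index positions inside each block.
    blocks = weights.reshape(rows // br, br, cols // bc, bc)
    norms = np.sqrt((blocks ** 2).sum(axis=(1, 3)))
    k = int(sparsity * norms.size)
    if k > 0:
        thresh = np.partition(norms.flatten(), k - 1)[k - 1]
        keep = norms > thresh
    else:
        keep = np.ones(norms.shape, dtype=bool)
    # Expand the per-block mask back to element resolution.
    full_mask = np.repeat(np.repeat(keep, br, axis=0), bc, axis=1)
    return weights * full_mask, full_mask
```

Unlike the unstructured case, the resulting zeros form contiguous blocks, which is what lets block-sparse kernels turn the sparsity into real speedups.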