[KDD'22] Learned Token Pruning for Transformers
☆99 · Updated Feb 27, 2023
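The core idea behind learned token pruning is to drop tokens whose attention-based importance falls below a threshold learned during fine-tuning. The snippet below is a minimal illustrative sketch of that idea in NumPy, not the LTP repository's actual API: the function name `prune_tokens`, the importance measure (mean attention each token receives), and the fixed `threshold` are all simplifying assumptions.

```python
import numpy as np

def prune_tokens(attn_probs, hidden_states, threshold=0.01):
    """Illustrative threshold-based token pruning (not the LTP API).

    attn_probs: (heads, seq, seq) softmax attention weights.
    hidden_states: (seq, dim) token representations.
    A token's importance is the mean attention it receives across
    all heads and all query positions; tokens below the threshold
    are dropped from the sequence.
    """
    importance = attn_probs.mean(axis=(0, 1))   # (seq,)
    keep = importance >= threshold              # boolean mask over tokens
    return hidden_states[keep], keep
```

In LTP the threshold itself is learned per layer rather than fixed, and pruning is applied progressively so deeper layers see shorter sequences; this sketch only shows the single-layer mask-and-drop step.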
Alternatives and similar repositories for LTP
Users interested in LTP are comparing it to the repositories listed below.
- Code for the ACL 2022 paper "Transkimmer: Transformer Learns to Layer-wise Skim" ☆22 · Updated Aug 21, 2022
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated May 25, 2022
- ☆43 · Updated Jan 30, 2024
- Official PyTorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated Nov 2, 2020
- Block Sparse movement pruning ☆83 · Updated Nov 26, 2020
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated Feb 28, 2023
- Implementation of a Quantized Transformer Model ☆19 · Updated Mar 20, 2019
- Open Source Projects from Pallas Lab ☆21 · Updated Oct 10, 2021
- ☆21 · Updated Apr 24, 2022
- ViTALiTy (HPCA'23) Code Repository ☆23 · Updated Mar 13, 2023
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated Dec 1, 2023
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆266 · Updated Jan 29, 2023
- AFP, a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆13 · Updated Nov 8, 2021
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆64 · Updated Aug 5, 2024
- Code for Neural Execution Engines: Learning to Execute Subroutines ☆18 · Updated Jan 11, 2021
- [NeurIPS 2020] "FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training" by Yonggan Fu, Ha… ☆10 · Updated Feb 13, 2022
- Official PyTorch Implementation for the paper "SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients" ☆17 · Updated Jan 12, 2022
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆198 · Updated May 9, 2023
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆96 · Updated Feb 6, 2024
- ☆19 · Updated Mar 21, 2023
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated Jul 25, 2023
- Source code for the AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated Dec 15, 2021
- ☆24 · Updated Jan 18, 2021
- SKFAC Preconditioner for MindSpore ☆12 · Updated Jul 2, 2021
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆33 · Updated Jun 14, 2023
- ☆13 · Updated Nov 25, 2022
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated Jul 13, 2022
- Code for the TKDE paper "Patient Health Representation Learning via Correlational Sparse Prior of Medical Features" ☆11 · Updated Jan 5, 2023
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆713 · Updated Aug 13, 2024
- Vision Transformer Pruning ☆57 · Updated Dec 9, 2021
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆74 · Updated Jul 13, 2022
- Implementation of "Neural Machine Translation without Embeddings" (NAACL 2021) ☆33 · Updated Jun 9, 2021
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆17 · Updated Jan 5, 2021
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated May 10, 2023
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆165 · Updated Jul 14, 2022
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated Oct 3, 2024
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated Nov 21, 2022
- Prune a model while fine-tuning or training ☆406 · Updated Jun 21, 2022