han-shi / SparseBERT
☆13 · Updated 2 years ago
Alternatives and similar repositories for SparseBERT
Users interested in SparseBERT are comparing it to the libraries listed below.
- [KDD'22] Learned Token Pruning for Transformers ☆99 · Updated 2 years ago
- This project is the official implementation of our accepted ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT. ☆88 · Updated 2 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- ☆59 · Updated last year
- [ACL 2024] Official PyTorch implementation of "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact" ☆47 · Updated last year
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆52 · Updated 9 months ago
- ☆21 · Updated 2 years ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆59 · Updated last year
- ☆42 · Updated 2 years ago
- AFPQ code implementation ☆22 · Updated last year
- ☆24 · Updated 9 months ago
- PyTorch implementation of our paper accepted by ICML 2023 -- "Bi-directional Masks for Efficient N:M Sparse Training" ☆12 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆191 · Updated 2 years ago
- Code for ICML 2021 submission ☆34 · Updated 4 years ago
- An end-to-end benchmark suite of multi-modal DNN applications for system-architecture co-design ☆22 · Updated 8 months ago
- BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models. ☆57 · Updated 2 years ago
- Code for the ACL 2022 publication Transkimmer: Transformer Learns to Layer-wise Skim ☆21 · Updated 3 years ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated last year
- A curated list of Early Exiting papers, benchmarks, and misc. ☆117 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated last year
- ☆33 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Updated 11 months ago
- ☆18 · Updated 8 months ago
- PyTorch implementation of our paper accepted by NeurIPS 2022 -- Learning Best Combination for Efficient N:M Sparsity ☆17 · Updated 2 years ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆23 · Updated last year
- The official implementation of the ICML 2023 paper OFQ-ViT ☆33 · Updated last year
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- Official implementation of "DPad: Efficient Diffusion Language Models with Suffix Dropout" ☆27 · Updated last week
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆42 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago