han-shi / SparseBERT
☆13 · Updated 2 years ago
Alternatives and similar repositories for SparseBERT:
Users interested in SparseBERT are comparing it to the libraries listed below.
- [KDD'22] Learned Token Pruning for Transformers ☆96 · Updated 2 years ago
- Official PyTorch implementation of IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact ☆43 · Updated 10 months ago
- Code for the ACL 2022 paper Transkimmer: Transformer Learns to Layer-wise Skim ☆21 · Updated 2 years ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆45 · Updated 4 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆45 · Updated last year
- Official PyTorch implementation of the NeurIPS 2022 (spotlight) paper Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- Code for an ICML 2021 submission ☆35 · Updated 4 years ago
- ☆19 · Updated 5 months ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆43 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT. ☆88 · Updated last year
- ☆21 · Updated 4 months ago
- ☆29 · Updated last year
- LLM Inference with Microscaling Format ☆20 · Updated 4 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆33 · Updated 6 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆31 · Updated last month
- PyTorch implementation of the NeurIPS 2022 paper Learning Best Combination for Efficient N:M Sparsity ☆17 · Updated 2 years ago
- ☆20 · Updated last year
- ☆19 · Updated 3 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts. ☆62 · Updated 3 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- ☆12 · Updated 11 months ago
- ☆50 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated last year
- Source code for the IJCAI 2022 long paper Parameter-Efficient Sparsity for Large Language Models Fine-Tuning. ☆13 · Updated 2 years ago
- PyTorch implementation of the ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆12 · Updated last year
- Official implementation of the ICML 2023 paper OFQ-ViT ☆30 · Updated last year
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- ☆42 · Updated 2 years ago
- MLPruning: PyTorch, NLP, BERT, structured pruning ☆21 · Updated 3 years ago