WoosukKwon / retraining-free-pruning
[NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
☆192 · Updated 2 years ago
Alternatives and similar repositories for retraining-free-pruning
Users interested in retraining-free-pruning are comparing it to the libraries listed below.
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- Code repo for the paper "LLM-QAT Data-Free Quantization Aware Training for Large Language Models"☆322Updated 10 months ago
- [KDD'22] Learned Token Pruning for Transformers ☆102 · Updated 2 years ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆96 · Updated last year
- The official implementation of the ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT". ☆89 · Updated 2 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆236 · Updated 2 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆267 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆76 · Updated 6 months ago
- ☆11 · Updated 2 years ago
- ☆56 · Updated last year
- ☆208 · Updated 4 years ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆67 · Updated 2 years ago
- ☆43 · Updated 3 years ago
- ☆157 · Updated 2 years ago
- A curated list of early-exiting work (LLM, CV, NLP, etc.) ☆69 · Updated last year
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆219 · Updated 2 years ago
- Awesome list for LLM pruning. ☆280 · Updated 3 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer L…" ☆49 · Updated 3 years ago
- All-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆142 · Updated 5 months ago
- ☆243 · Updated 3 years ago
- ☆222 · Updated 2 years ago
- ☆21 · Updated last year
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆114 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆198 · Updated 2 years ago
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆216 · Updated 11 months ago
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs. ☆132 · Updated last year
- ☆83 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated 6 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆171 · Updated last month