shawnricecake / edge-qat
Official Repo for EdgeQAT
☆14 · Updated 6 months ago
Alternatives and similar repositories for edge-qat
Users interested in edge-qat are comparing it to the libraries listed below.
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆49 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆58 · Updated last month
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆36 · Updated 7 months ago
- ☆22 · Updated last month
- ☆20 · Updated 6 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆47 · Updated 2 years ago
- ☆51 · Updated last year
- Official PyTorch implementation of our paper accepted at ICLR 2024, Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- ☆21 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated 11 months ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Updated last year
- ☆10 · Updated last year
- ☆15 · Updated 2 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆47 · Updated 5 months ago
- This project is the official implementation of our accepted IEEE TPAMI paper Diverse Sample Generation: Pushing the Limit of Data-free Qu… ☆14 · Updated 2 years ago
- LLM Inference with Microscaling Format ☆22 · Updated 6 months ago
- ☆18 · Updated 5 months ago
- Official PyTorch implementation of "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact" ☆44 · Updated 11 months ago
- PyTorch implementation of our paper accepted by ICML 2024, CaM: Cache Merging for Memory-efficient LLMs Inference ☆37 · Updated 10 months ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆32 · Updated 4 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆36 · Updated last year
- ☆42 · Updated last year
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆45 · Updated last month
- Is gradient information useful for pruning LLMs? ☆45 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆49 · Updated last year
- ☆28 · Updated 9 months ago
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆23 · Updated 4 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ☆86 · Updated 5 months ago