AboveParadise / LLMCBench
☆10 · Updated last week
Related projects
Alternatives and complementary repositories for LLMCBench
- Code Repository of Evaluating Quantized Large Language Models ☆103 · Updated 2 months ago
- ☆80 · Updated 11 months ago
- A repository of Binary General Matrix Multiply (BGEMM) implemented with customized CUDA kernels. Thanks to FP6-LLM for the wheels! ☆12 · Updated 2 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆12 · Updated 4 months ago
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization ☆112 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆46 · Updated 2 years ago
- (ICCV 2023) Official implementation of Rectified Straight Through Estimator (ReSTE). ☆25 · Updated last month
- All-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆36 · Updated this week
- ☆20 · Updated last week
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆101 · Updated last month
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including language… ☆152 · Updated last week
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆53 · Updated 8 months ago
- List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆54 · Updated 5 months ago
- ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆31 · Updated 2 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆42 · Updated 7 months ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆51 · Updated 4 months ago
- The official implementation of the ICML 2023 paper OFQ-ViT ☆27 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆49 · Updated last year
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆82 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆10 · Updated this week
- Awesome list for LLM pruning. ☆159 · Updated last month
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling ☆42 · Updated last year
- DeiT implementation for Q-ViT ☆23 · Updated 2 years ago
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models ☆17 · Updated 11 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆75 · Updated 2 months ago
- ☆18 · Updated 2 years ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs. ☆81 · Updated 5 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆102 · Updated last year
- Post-training quantization for Vision Transformers. ☆189 · Updated 2 years ago
- ☆38 · Updated 7 months ago