A model compression and acceleration toolbox based on PyTorch.
☆333 · Updated Jan 12, 2024
Alternatives and similar repositories for Sparsebit
Users interested in Sparsebit are comparing it to the libraries listed below.
- PyTorch implementation of SSQL (accepted to ECCV 2022, oral presentation) ☆73 · Updated Mar 15, 2023
- IntLLaMA: a fast and light quantization solution for LLaMA ☆18 · Updated Jul 21, 2023
- MM 2022 Workshop: Perceptual Conversational Head Generation with Regularized Driver and Enhanced Renderer ☆55 · Updated May 16, 2024
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆359 · Updated Apr 11, 2023
- PPL Quantization Tool (PPQ), a powerful offline neural network quantization tool ☆1,785 · Updated Mar 28, 2024
- Model Quantization Benchmark ☆858 · Updated Apr 20, 2025
- PyTorch implementation of US3L (accepted to CVPR 2023) ☆33 · Updated Mar 15, 2023
- A tool that converts a TensorRT engine/plan to a fake ONNX model ☆41 · Updated Nov 22, 2022
- ☆14 · Updated Feb 3, 2022
- Slides with modifications for a course at Tsinghua University ☆64 · Updated Aug 17, 2022
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,612 · Updated Jul 12, 2024
- Tengine 管子, a helper tool for quickly building demos ☆12 · Updated Jul 15, 2021
- Reorder-based post-training quantization for large language models ☆199 · Updated May 17, 2023
- MegCC, a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and simple portability ☆486 · Updated Oct 23, 2024
- Quantization library for PyTorch, supporting low-precision and mixed-precision quantization with hardware implementation through TVM ☆453 · Updated May 15, 2023
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" ☆2,261 · Updated Mar 27, 2024
- Post-training quantization for Vision Transformers ☆238 · Updated Jul 19, 2022
- HunyuanDiT with TensorRT and libtorch ☆18 · Updated May 22, 2024
- TinyNeuralNetwork, an efficient and easy-to-use deep learning model compression framework ☆866 · Updated Dec 24, 2025
- AIMET, a library that provides advanced quantization and compression techniques for trained neural network models ☆2,565 · Updated this week
- Example of applying Gaussian and Laplace clipping to CNN activations ☆34 · Updated Jan 20, 2019
- ☆45 · Updated Jul 14, 2021
- Offline quantization tools for deployment ☆142 · Updated Dec 28, 2023
- A converter from MegEngine to other frameworks ☆69 · Updated Apr 27, 2023
- ☆156 · Updated Jun 22, 2023
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,443 · Updated Jul 17, 2025
- ☆19 · Updated Mar 16, 2022
- ☆23 · Updated Jan 3, 2024
- ☆79 · Updated Jul 21, 2022
- GPTQ inference TVM kernel ☆40 · Updated Apr 25, 2024
- A primitive library for neural networks ☆1,366 · Updated Nov 24, 2024
- A simple network quantization demo built from scratch in PyTorch ☆542 · Updated Jun 18, 2023
- PyTorch implementation of BRECQ (ICLR 2021) ☆290 · Updated Aug 1, 2021
- LLaMA/RWKV ONNX models, quantization, and test cases ☆366 · Updated Jul 6, 2023
- EasyQuant (EQ), an efficient and simple post-training quantization method that works by effectively optimizing the scales of weights and activations ☆408 · Updated Nov 22, 2022
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆73 · Updated Jul 7, 2022
- micronet, a model compression and deployment library. Compression: quantization, including quantization-aware training (QAT) and high-bit (>2b) (DoReFa/Quantiz… ☆2,269 · Updated May 6, 2025
- A set of examples around MegEngine ☆31 · Updated Dec 8, 2023
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆178 · Updated Feb 19, 2026