megvii-research / Sparsebit
A model compression and acceleration toolbox based on PyTorch.
☆331 · Updated last year
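For context, everything on this page revolves around quantizing PyTorch models. As a minimal, generic sketch of the post-training quantization workflow such toolboxes automate, here is stock PyTorch dynamic quantization (this uses the built-in `torch.ao.quantization` API, not Sparsebit's own interface):

```python
import torch
import torch.nn as nn

# A toy float model standing in for any PyTorch network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic post-training quantization: Linear weights are stored as int8;
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized(torch.randn(1, 128)).shape)  # torch.Size([1, 10])
```

Dedicated toolboxes such as Sparsebit and the libraries below extend this idea with richer bit-width choices, calibration, and sparsity; see each repository for its own API.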
Alternatives and similar repositories for Sparsebit:
Users interested in Sparsebit are comparing it to the libraries listed below.
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆333 · Updated 2 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆202 · Updated last year
- Reorder-based post-training quantization for large language models ☆187 · Updated last year
- LLaMa/RWKV ONNX models, quantization, and test cases ☆361 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆284 · Updated last month
- Model Quantization Benchmark ☆800 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,394 · Updated 9 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" ☆118 · Updated last year
- Post-Training Quantization for Vision Transformers ☆215 · Updated 2 years ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆197 · Updated last year
- Offline quantization tools for deployment ☆127 · Updated last year
- PyTorch implementation of BRECQ, ICLR 2021 ☆272 · Updated 3 years ago
- A converter from MegEngine to other frameworks ☆69 · Updated last year
- [EMNLP 2024 Industry Track] The official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V…" ☆461 · Updated this week
- TensorRT plugin autogen tool ☆370 · Updated 2 years ago
- llm-export can export LLMs to ONNX ☆282 · Updated 3 months ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs ☆116 · Updated 2 weeks ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM ☆431 · Updated last year
- The CUDA version of the RWKV language model (https://github.com/BlinkDL/RWKV-LM) ☆222 · Updated 4 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆472 · Updated last year
- A parser, editor, and profiler tool for ONNX models ☆426 · Updated 3 months ago
- Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆123 · Updated last month
- Microsoft Automatic Mixed Precision Library ☆595 · Updated 6 months ago
- GPTQ inference Triton kernel ☆300 · Updated last year
- OTOv1-v3 (NeurIPS, ICLR, TMLR): DNN training, compression, structured pruning, erasing operators; CNN, diffusion, and LLM models ☆302 · Updated 7 months ago