megvii-research / Sparsebit
A model compression and acceleration toolbox based on PyTorch.
☆331 · Updated last year
Alternatives and similar repositories for Sparsebit
Users interested in Sparsebit are comparing it to the libraries listed below.
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆309 · Updated last year
- Model Quantization Benchmark ☆844 · Updated 6 months ago
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆353 · Updated 2 years ago
- Offline quantization tools for deployment. ☆141 · Updated last year
- EasyQuant(EQ) is an efficient and simple post-training quantization method that effectively optimizes the scales of weights and activations. ☆404 · Updated 2 years ago
- TensorRT Hackathon 2022 final-round solution: TensorRT inference optimization for MST++, the first Transformer-based image reconstruction model ☆143 · Updated 3 years ago
- TensorRT Plugin Autogen Tool ☆368 · Updated 2 years ago
- ☆241 · Updated 2 years ago
- ☆206 · Updated 3 years ago
- PyTorch implementation of BRECQ, ICLR 2021 ☆284 · Updated 4 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆199 · Updated 2 years ago
- A parser, editor and profiler tool for ONNX models. ☆460 · Updated 3 months ago
- LLaMA/RWKV ONNX models, quantization and test cases ☆367 · Updated 2 years ago
- A converter from MegEngine to other frameworks ☆70 · Updated 2 years ago
- A simple network quantization demo using PyTorch from scratch (a minimal sketch of the same idea follows this list). ☆538 · Updated 2 years ago
- The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM ) ☆223 · Updated 10 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆451 · Updated 2 years ago
- Post-Training Quantization for Vision Transformers. ☆228 · Updated 3 years ago
- A set of examples around MegEngine ☆31 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆317 · Updated 7 months ago
- A powerful toolkit for compressing large models including LLM, VLM, and video generation models. ☆599 · Updated 2 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆479 · Updated last year
- ☆228 · Updated 4 years ago
- The official PyTorch implementation of the ICLR 2022 paper QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization ☆124 · Updated last month
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆217 · Updated last year
- Inference of quantization aware trained networks using TensorRT ☆83 · Updated 2 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆360 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆220 · Updated 2 years ago
- NART ("NART is not A RunTime") is a deep learning inference framework. ☆37 · Updated 2 years ago
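Most of the quantization toolkits listed above build on the same affine quantize/dequantize primitive, differing mainly in how scales and zero points are calibrated. As a minimal, illustrative sketch in PyTorch (not code from any repository above; the function names `quantize_tensor` and `dequantize_tensor` are hypothetical), here is 8-bit min-max affine quantization of a tensor:

```python
import torch

def quantize_tensor(x: torch.Tensor, num_bits: int = 8):
    """Min-max affine quantization: q = clamp(round(x / scale) + zp, qmin, qmax)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min().item(), x.max().item()
    # Guard against a constant tensor, where x_min == x_max.
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    zero_point = min(max(zero_point, qmin), qmax)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize_tensor(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Map the integers back to floats; x_hat approximates the original x."""
    return (q.float() - zero_point) * scale

x = torch.randn(4, 4)
q, scale, zp = quantize_tensor(x)
x_hat = dequantize_tensor(q, scale, zp)
print("max quantization error:", (x - x_hat).abs().max().item())  # at most ~scale / 2
```

Post-training quantization methods such as BRECQ, QDrop, and Optimal Brain Compression refine this basic scheme, for example by choosing scales per channel and tuning rounding to minimize reconstruction error on calibration data rather than relying on plain min-max statistics.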