tigert1998 / qat
Manually implemented quantization-aware training
☆21Updated 2 years ago
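The core of a manually implemented QAT pipeline like this one is a fake-quantization step: values are rounded onto an integer grid and immediately dequantized, while a straight-through estimator lets gradients pass the non-differentiable rounding. Below is a minimal sketch of that step in PyTorch, assuming naive per-tensor int8 parameters; the function name `fake_quantize` is illustrative and not taken from the qat repository or any library listed below.

```python
# Minimal fake-quantization sketch (quantize -> clamp -> dequantize) with a
# straight-through estimator, as used in quantization-aware training.
import torch


def fake_quantize(x: torch.Tensor, scale: float, zero_point: int,
                  qmin: int = -128, qmax: int = 127) -> torch.Tensor:
    """Simulate int8 quantization in float while keeping gradients flowing."""
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    x_dq = (q - zero_point) * scale
    # Straight-through estimator: forward returns x_dq, backward sees identity.
    return x + (x_dq - x).detach()


if __name__ == "__main__":
    w = torch.randn(4, 4, requires_grad=True)
    scale = w.detach().abs().max().item() / 127  # naive per-tensor scale
    w_q = fake_quantize(w, scale, zero_point=0)
    w_q.sum().backward()
    print(w_q, w.grad)  # gradient is all ones thanks to the STE
```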
Alternatives and similar repositories for qat
Users interested in qat are comparing it to the libraries listed below.
- PyTorch Quantization Aware Training Example☆136Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer☆92Updated last month
- play gemm with tvm☆91Updated last year
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization☆95Updated 3 years ago
- Inference of quantization aware trained networks using TensorRT☆82Updated 2 years ago
- ☆69Updated 2 years ago
- CUDA Templates for Linear Algebra Subroutines☆100Updated last year
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming☆98Updated 4 years ago
- ☆149Updated 2 years ago
- BitSplit Post-training Quantization☆50Updated 3 years ago
- To deploy Transformer models in CV to mobile devices.☆18Updated 3 years ago
- ☆40Updated 3 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"☆56Updated 5 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon☆54Updated 3 years ago
- ☆205Updated 3 years ago
- A Winograd Minimal Filter Implementation in CUDA☆25Updated 3 years ago
- ☆18Updated 4 years ago
- ☆21Updated 4 years ago
- ☆236Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios.☆38Updated 4 months ago
- FakeQuantize with Learned Step Size (LSQ+) as Observer in PyTorch (see the sketch after this list)☆34Updated 3 years ago
- TQT's PyTorch implementation.☆21Updated 3 years ago
- ☆36Updated 2 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration☆200Updated 3 years ago
- pytorch-profiler☆51Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming☆36Updated 2 years ago
- Code for our paper at ECCV 2020: Post-Training Piecewise Linear Quantization for Deep Neural Networks☆69Updated 3 years ago
- An Out-of-the-Box PyTorch Scaffold for Neural Network Quantization-Aware Training (QAT) Research. Website: https://github.com/zhutmost/neuralz…☆26Updated 2 years ago
- Quantization-aware training package for NCNN on PyTorch☆70Updated 3 years ago
- ☆44Updated 3 years ago
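Several of the entries above (the LSQ+ observer, the TQT implementation, the NCNN QAT package) build on the idea of learning the quantization step size during training rather than fixing it from statistics. The sketch below reduces that idea to a single per-tensor quantizer in PyTorch; the class name `LSQQuantizer` and its defaults are illustrative assumptions, not code from any repository listed here.

```python
# Sketch of a learned-step-size (LSQ-style) per-tensor quantizer: the step
# size is a trainable parameter, and gradients reach it through the rounding
# via a straight-through estimator.
import math
import torch
import torch.nn as nn


def _round_ste(x: torch.Tensor) -> torch.Tensor:
    # Round in the forward pass, identity gradient in the backward pass.
    return (x.round() - x).detach() + x


def _grad_scale(t: torch.Tensor, factor: float) -> torch.Tensor:
    # Keep the forward value of `t` but scale its gradient by `factor`.
    return t * factor + (t - t * factor).detach()


class LSQQuantizer(nn.Module):
    """Per-tensor quantizer with a learnable step size (illustrative name)."""

    def __init__(self, bits: int = 8, init_scale: float = 0.1):
        super().__init__()
        self.qmin = -(2 ** (bits - 1))
        self.qmax = 2 ** (bits - 1) - 1
        self.scale = nn.Parameter(torch.tensor(float(init_scale)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gradient scaling from the LSQ paper stabilizes the step-size update.
        factor = 1.0 / math.sqrt(x.numel() * self.qmax)
        s = _grad_scale(self.scale, factor)
        q = torch.clamp(_round_ste(x / s), self.qmin, self.qmax)
        return q * s  # dequantized output; gradients reach both x and s


if __name__ == "__main__":
    quant = LSQQuantizer(bits=4)
    x = torch.randn(2, 8, requires_grad=True)
    y = quant(x)
    y.sum().backward()
    print(y, quant.scale.grad)  # the step size receives a gradient
```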