tigert1998 / qat
Manually implemented quantization-aware training
☆23 · Updated 3 years ago
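For context on what this repo hand-rolls: QAT inserts "fake quantization" ops during training so the network learns under the quantization noise it will see at inference, using a straight-through estimator (STE) to get gradients past the non-differentiable rounding. Below is a minimal PyTorch sketch of that core trick; all names are illustrative and not taken from tigert1998/qat.

```python
import torch

class FakeQuantize(torch.autograd.Function):
    """Uniform affine fake quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x, scale, zero_point, qmin, qmax):
        # Quantize to the integer grid, then dequantize back to float,
        # so downstream layers see the quantization error.
        q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
        return (q - zero_point) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: round() has zero gradient almost everywhere, so pretend the
        # op is the identity w.r.t. x; the other four inputs get no grad.
        return grad_output, None, None, None, None


def affine_params(t, qmin=0, qmax=255):
    """Illustrative per-tensor asymmetric 8-bit range estimation from min/max."""
    lo, hi = t.min(), t.max()
    scale = (hi - lo).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(qmin - lo / scale)
    return scale, zero_point


def fake_quant_linear(x, w):
    """A quantization-aware linear op: fake-quantize activations and weights."""
    sx, zx = affine_params(x)
    sw, zw = affine_params(w)
    xq = FakeQuantize.apply(x, sx, zx, 0, 255)
    wq = FakeQuantize.apply(w, sw, zw, 0, 255)
    return xq @ wq.t()
```

During training everything stays in float; only the quantize-dequantize round trip simulates the integer kernel's error. At export time the learned weights are re-quantized for real integer arithmetic.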
Alternatives and similar repositories for qat
Users interested in qat are comparing it to the libraries listed below.
- ☆208 · Updated 4 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- ☆68 · Updated 2 years ago
- ☆169 · Updated 2 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆57 · Updated 4 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆56 · Updated 6 years ago
- Benchmark PyTorch Custom Operators ☆14 · Updated 2 years ago
- PyTorch Quantization Aware Training Example ☆150 · Updated last year
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆93 · Updated 3 years ago
- ☆244 · Updated 3 years ago
- ☆160 · Updated 2 years ago
- Play GEMM with TVM ☆92 · Updated 2 years ago
- ☆18 · Updated 5 years ago
- Deploying computer-vision Transformer models to mobile devices ☆18 · Updated 4 years ago
- CUDA Templates for Linear Algebra Subroutines ☆101 · Updated last year
- BitSplit Post-Training Quantization ☆50 · Updated 4 years ago
- ☆40 · Updated 5 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 6 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated 4 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆105 · Updated 7 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆28 · Updated 5 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆142 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆35 · Updated 2 years ago
- ☆41 · Updated 3 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆127 · Updated last year
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆50 · Updated last year
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- A study of Ampere's sparse matmul ☆18 · Updated 5 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated last year
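The "Online normalizer calculation for softmax" entry above refers to Milakov and Gimelshein's single-pass recurrence, which fuses the max pass and the sum pass of a numerically safe softmax: whenever a new maximum appears, the accumulated normalizer is rescaled by exp(old_max - new_max). A minimal Python sketch of the recurrence (variable names are mine, not the benchmark repo's):

```python
import math

def online_softmax(xs):
    """Single-pass safe softmax: running max m, running normalizer d."""
    m = float("-inf")  # running maximum
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        # Rescale the old normalizer to the new max, then add the new term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]
```

The same rescaling trick is what lets FlashAttention (the standalone v2 kernel listed above) compute softmax tile by tile without materializing the full attention row.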