ModelTC / NART
NART ("NART is not A RunTime") is a deep learning inference framework.
☆38 · Updated last year
Related projects
Alternatives and complementary repositories for NART
- Offline quantization tools for deployment. (☆116, updated 10 months ago)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. (☆85, updated 8 months ago)
- Inference of quantization-aware trained networks using TensorRT. (☆79, updated last year)
- A set of examples around MegEngine. (☆31, updated 11 months ago)
- Experiments with GEMM kernels in TVM. (☆84, updated last year)
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration. (☆195, updated 2 years ago)
- QQQ, a hardware-optimized W4A8 quantization solution for LLMs. (☆89, updated last month)
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library. (☆52, updated 3 months ago)
- A converter from MegEngine to other frameworks. (☆67, updated last year)
- PyTorch implementation of BRECQ (ICLR 2021). (☆254, updated 3 years ago)
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming. (☆95, updated 3 years ago)
- The official PyTorch implementation of the ICLR 2022 paper "QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization". (☆113, updated last year)
- Symmetric int8 GEMM (see the sketch after this list). (☆66, updated 4 years ago)
- Code-reading notes for TVM. (☆71, updated 2 years ago)
- Benchmark scripts for TVM. (☆73, updated 2 years ago)
- Standalone Flash Attention v2 kernel without a libtorch dependency. (☆98, updated 2 months ago)
- Post-training quantization for vision transformers. (☆191, updated 2 years ago)
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. (☆29, updated 2 months ago)
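
To give a feel for what the quantization-oriented entries above (symmetric int8 GEMM, W4A8, post-training quantization) are about, here is a minimal NumPy sketch of per-tensor symmetric int8 quantization followed by a GEMM with int32 accumulation. It is illustrative only; the function and variable names are hypothetical and not taken from NART or any repository listed above.

```python
# Minimal sketch of symmetric int8 quantization + int8 GEMM with int32 accumulation.
# Hypothetical helper names; not the API of any repository listed above.
import numpy as np

def symmetric_quantize(x: np.ndarray, num_bits: int = 8):
    """Per-tensor symmetric quantization: scale maps max |x| onto the int range."""
    qmax = 2 ** (num_bits - 1) - 1                       # 127 for int8
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def int8_gemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Quantize fp32 inputs, multiply in int32, then dequantize the result."""
    qa, sa = symmetric_quantize(a)
    qb, sb = symmetric_quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)      # int32 accumulator
    return acc.astype(np.float32) * (sa * sb)            # dequantize with both scales

if __name__ == "__main__":
    a = np.random.randn(64, 128).astype(np.float32)
    b = np.random.randn(128, 32).astype(np.float32)
    err = np.abs(int8_gemm(a, b) - a @ b).max()
    print(f"max abs error vs fp32 GEMM: {err:.4f}")
```

Real kernels (CUTLASS, FasterTransformer extractions, etc.) do the integer multiply-accumulate on tensor cores and often use per-channel scales, but the quantize / int-accumulate / dequantize structure is the same.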