ModelTC / NART
NART (NART is not A RunTime) is a deep learning inference framework.
☆ 37 · Updated 2 years ago
Alternatives and similar repositories for NART
Users interested in NART are comparing it to the libraries listed below.
- ☆ 140 · Updated last year
- ☆ 152 · Updated 11 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer · ☆ 96 · Updated 3 months ago
- ☆ 11 · Updated 11 months ago
- ☆ 60 · Updated last year
- ☆ 98 · Updated 4 years ago
- ☆ 38 · Updated last year
- Offline quantization tools for deployment · ☆ 141 · Updated last year
- ☆ 37 · Updated 3 years ago
- ☆ 44 · Updated 4 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration · ☆ 200 · Updated 3 years ago
- Playing with GEMM kernels in TVM · ☆ 92 · Updated 2 years ago
- Code-reading notes for TVM · ☆ 76 · Updated 3 years ago
- ☆ 119 · Updated 8 months ago
- OneFlow->ONNX · ☆ 43 · Updated 2 years ago
- ☆ 243 · Updated 3 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) · ☆ 84 · Updated 2 years ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs · ☆ 150 · Updated 3 months ago
- Benchmark scripts for TVM · ☆ 74 · Updated 3 years ago
- Compiler Infrastructure for Neural Networks · ☆ 147 · Updated 2 years ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios · ☆ 43 · Updated 9 months ago
- ☆ 130 · Updated 11 months ago
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" · ☆ 50 · Updated 2 years ago
- heterogeneity-aware-lowering-and-optimization · ☆ 257 · Updated last year
- Symmetric INT8 GEMM (see the sketch after this list) · ☆ 67 · Updated 5 years ago
- Inference of quantization-aware trained networks using TensorRT · ☆ 83 · Updated 2 years ago
- ☆ 167 · Updated 2 years ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library · ☆ 78 · Updated last year
- This repository contains integer operators on GPUs for PyTorch · ☆ 223 · Updated 2 years ago
- ☆ 102 · Updated last year
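
Several entries above center on low-precision GEMM. As context for the symmetric INT8 GEMM entry, here is a minimal NumPy sketch of the general technique; it is my own illustration, not code from any repository listed here. Symmetric quantization maps a tensor to int8 with a single scale and no zero-point, so the matrix multiply can run in integer arithmetic and be dequantized by the product of the two scales.

```python
import numpy as np

def quantize_sym_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: x ~= scale * q, q in [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Random fp32 operands standing in for an activation and a weight matrix.
a = np.random.randn(64, 128).astype(np.float32)
b = np.random.randn(128, 32).astype(np.float32)

qa, sa = quantize_sym_int8(a)
qb, sb = quantize_sym_int8(b)

# Integer GEMM with int32 accumulation, then dequantize with the product
# of the two scales (no zero-point correction needed in the symmetric scheme).
c_int32 = qa.astype(np.int32) @ qb.astype(np.int32)
c = c_int32.astype(np.float32) * (sa * sb)

print(np.abs(c - a @ b).max())  # small quantization error vs. the fp32 reference
```

Production kernels differ mainly in doing the same arithmetic with per-channel scales and hardware int8 instructions (e.g. dp4a or int8 tensor cores) rather than NumPy.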