ModelTC / NART
NART (NART is not A RunTime) is a deep learning inference framework.
☆37 · Updated 2 years ago
Alternatives and similar repositories for NART
Users interested in NART are comparing it to the libraries listed below.
- ☆139 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer · ☆94 · Updated last month
- ☆59 · Updated 11 months ago
- Offline quantization tools for deployment · ☆140 · Updated last year
- ☆150 · Updated 9 months ago
- ☆11 · Updated 9 months ago
- ☆37 · Updated last year
- ☆98 · Updated 4 years ago
- ☆109 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios · ☆41 · Updated 8 months ago
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library · ☆76 · Updated last year
- ☆241 · Updated 2 years ago
- OneFlow->ONNX · ☆43 · Updated 2 years ago
- Play GEMM with TVM · ☆92 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- ☆36 · Updated 3 years ago
- Code reading for TVM · ☆76 · Updated 3 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration · ☆200 · Updated 3 years ago
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… · ☆47 · Updated 2 years ago
- ☆100 · Updated last year
- ☆129 · Updated 10 months ago
- heterogeneity-aware-lowering-and-optimization · ☆256 · Updated last year
- Inference of quantization-aware trained networks using TensorRT · ☆83 · Updated 2 years ago
- Integer operators on GPUs for PyTorch · ☆220 · Updated 2 years ago
- Benchmark scripts for TVM · ☆74 · Updated 3 years ago
- Symmetric int8 GEMM (a minimal sketch of the idea follows after this list) · ☆67 · Updated 5 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon · ☆56 · Updated 3 years ago
- Tutorials on extending and importing TVM as a CMake include dependency · ☆16 · Updated last year
- CUDA Templates for Linear Algebra Subroutines · ☆100 · Updated last year
- ☆141 · Updated last year
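For background on the "symmetric int8 GEMM" entry above, here is a minimal NumPy sketch of the general technique (illustrative only, not code from that repository): both operands are quantized symmetrically around zero with per-tensor scales, the matrix product is accumulated in int32, and the result is dequantized by the product of the two scales.

```python
# Minimal sketch of a symmetric int8 GEMM (illustrative, per-tensor scales).
import numpy as np

def quantize_symmetric(x: np.ndarray):
    """Quantize a float32 tensor to int8 with a single symmetric scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_gemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Approximate C = A @ B via int8 operands with int32 accumulation."""
    qa, sa = quantize_symmetric(a)
    qb, sb = quantize_symmetric(b)
    # Accumulate in int32 to avoid overflow, then dequantize with sa * sb.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 128), dtype=np.float32)
    b = rng.standard_normal((128, 32), dtype=np.float32)
    print("max abs error:", np.abs(a @ b - int8_gemm(a, b)).max())
```

Production kernels (such as those in the listed repositories) typically use per-channel or per-group scales and fused dequantization, but the arithmetic above is the core idea.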