kentaroy47 / benchmark-FP32-FP16-INT8-with-TensorRT

Benchmark inference speed of CNNs with various quantization methods in Pytorch+TensorRT with Jetson Nano/Xavier
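The repository's scripts are not shown here, but the idea of benchmarking inference latency can be sketched with a generic timing harness. The `benchmark` helper below is a hypothetical illustration (not the repository's actual code): it warms up a callable, then reports the median wall-clock latency, which is the usual pattern when comparing FP32/FP16/INT8 engine variants.

```python
import time
from statistics import median

def benchmark(fn, warmup=10, iters=100):
    """Generic latency harness (illustrative, not the repo's script):
    run `fn` a few warmup iterations, then return the median latency
    in milliseconds over `iters` timed runs."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e3)  # seconds -> ms
    return median(times)

# Dummy CPU workload standing in for model(x); on a Jetson you would
# pass a lambda that runs one TensorRT/PyTorch inference instead.
lat_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"median latency: {lat_ms:.3f} ms")
```

In practice the same harness is run once per precision mode (FP32, FP16, INT8) and the median latencies are compared; the median is preferred over the mean because it is robust to scheduler jitter on embedded boards.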
56 stars · Updated May 31, 2023

Alternatives and similar repositories for benchmark-FP32-FP16-INT8-with-TensorRT

Users interested in benchmark-FP32-FP16-INT8-with-TensorRT are comparing it to the repositories listed below.

