KumaTea / tensorflow-aarch64
TensorFlow wheels (whl) for aarch64 / ARMv8 / ARM64
☆137 · Updated 2 years ago
Alternatives and similar repositories for tensorflow-aarch64
Users interested in tensorflow-aarch64 are comparing it to the libraries listed below.
- PyTorch wheels (whl) & conda for aarch64 / ARMv8 / ARM64 ☆225 · Updated 2 years ago
- This script converts the ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆341 · Updated 2 years ago
- Convert tf.keras/Keras models to ONNX ☆379 · Updated 3 years ago
- Prebuilt binary for TensorFlow Lite's standalone installer. For Raspberry Pi. A very lightweight installer. I provide a FlexDelegate, Media… ☆218 · Updated last year
- Dockerfiles and scripts for ONNX container images ☆137 · Updated 2 years ago
- Convert ONNX model graph to Keras model format. ☆202 · Updated 11 months ago
- Built Python wheel files of https://github.com/microsoft/onnxruntime for Raspberry Pi 32-bit Linux. ☆130 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆202 · Updated last month
- Accelerate PyTorch models with ONNX Runtime ☆363 · Updated 3 months ago
- Transform ONNX model to PyTorch representation ☆337 · Updated 6 months ago
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices. ☆406 · Updated last month
- Prebuilt binary with TensorFlow Lite enabled. For Raspberry Pi / Jetson Nano. Support for custom operations in MediaPipe. XNNPACK, XNNPACK… ☆506 · Updated last year
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆285 · Updated 3 years ago
- This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks. ☆170 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆134 · Updated this week
- Sample projects for TensorFlow Lite in C++ with delegates such as GPU, EdgeTPU, XNNPACK, NNAPI ☆372 · Updated 2 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆478 · Updated this week
- PyTorch to TensorFlow Lite converter ☆183 · Updated 10 months ago
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala. ☆624 · Updated 3 weeks ago
- PyTorch 1.7.0 and torchvision 0.8.0 builds for Raspberry Pi 4 (32-bit OS) ☆96 · Updated 4 years ago
- Common utilities for ONNX converters ☆271 · Updated 6 months ago
- YOLOv4 implemented in TensorFlow 2. ☆135 · Updated 3 years ago
- ONNX Optimizer ☆717 · Updated last week
- A profiling and performance analysis tool for machine learning ☆387 · Updated this week
- tfyolo: Efficient implementation of YOLOv5 in TensorFlow ☆233 · Updated last year
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆390 · Updated this week
- Sample app code for LPR deployment on DeepStream ☆219 · Updated 7 months ago
- OpenVINO environment with Docker ☆69 · Updated 4 years ago
- Save and load frozen graphs, and run inference from a frozen graph, in TensorFlow 1.x and 2.x ☆303 · Updated 4 years ago
- Sample app code for deploying TAO Toolkit trained models to Triton ☆87 · Updated 9 months ago