KumaTea / tensorflow-aarch64
TensorFlow wheels (whl) for aarch64 / ARMv8 / ARM64
☆137 · Updated 2 years ago
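For orientation, below is a minimal sanity check one might run after installing one of these prebuilt wheels; the wheel filename pattern in the comment is illustrative, not taken from this repository's release assets.

```python
# Minimal sanity check after installing one of these prebuilt wheels, e.g.
#   pip install tensorflow-<version>-cp<py>-cp<py>-linux_aarch64.whl
# (the filename pattern above is illustrative, not an exact release asset).
import platform
import tensorflow as tf

print("machine:", platform.machine())   # expected: 'aarch64' on ARM64 hosts
print("tensorflow:", tf.__version__)

# Run a tiny op to confirm the runtime actually executes on this platform.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x).numpy())         # expected: 10.0
```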
Alternatives and similar repositories for tensorflow-aarch64
Users who are interested in tensorflow-aarch64 are comparing it to the libraries listed below.
- PyTorch wheels (whl) & conda for aarch64 / ARMv8 / ARM64☆224 · Updated 2 years ago
- Dockerfiles and scripts for ONNX container images☆137 · Updated 2 years ago
- Accelerate PyTorch models with ONNX Runtime☆359 · Updated 2 months ago
- Convert tf.keras/Keras models to ONNX (see the conversion sketch after this list)☆378 · Updated 3 years ago
- TensorFlow/TensorRT integration☆742 · Updated last year
- Scailable ONNX Python tools☆97 · Updated 6 months ago
- ONNX Optimizer☆707 · Updated 2 weeks ago
- This script converts the ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and…☆341 · Updated 2 years ago
- PyTorch to TensorFlow Lite converter☆183 · Updated 9 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv…☆476 · Updated 3 weeks ago
- Common utilities for ONNX converters☆269 · Updated 5 months ago
- Convert ONNX model graph to Keras model format.☆202 · Updated 10 months ago
- Script to typecast ONNX model parameters from INT64 to INT32.☆107 · Updated last year
- This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks.☆170 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.☆134 · Updated this week
- Sample projects for TensorFlow Lite in C++ with delegates such as GPU, EdgeTPU, XNNPACK, NNAPI☆372 · Updated 2 years ago
- TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices.☆405 · Updated last month
- Examples using the TensorFlow Lite API to run inference on Coral devices☆187 · Updated 9 months ago
- Share PyTorch binaries built for Raspberry Pi☆89 · Updated 3 years ago
- Sample app code for deploying TAO Toolkit trained models to Triton☆87 · Updated 8 months ago
- Easy-to-use Python camera interface for NVIDIA Jetson☆440 · Updated 4 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server☆284 · Updated 2 years ago
- Parse TFLite models (*.tflite) easily with Python. Check the API at https://zhenhuaw.me/tflite/docs/☆98 · Updated 3 months ago
- Count the number of parameters / MACs / FLOPS for ONNX models.☆92 · Updated 6 months ago
- Productionize machine learning predictions, with ONNX or without☆65 · Updated last year
- Face Recognition on NVIDIA Jetson (Nano) using TensorRT☆214 · Updated 5 months ago
- The Triton backend for ONNX Runtime.☆145 · Updated this week
- Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv…☆793 · Updated this week
- Prebuilt binary with TensorFlow Lite enabled. For Raspberry Pi / Jetson Nano. Support for custom operations in MediaPipe. XNNPACK, XNNPACK…☆506 · Updated last year
- Prebuilt binary for TensorFlow Lite's standalone installer. For Raspberry Pi. A very lightweight installer. I provide a FlexDelegate, Media…☆216 · Updated last year
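Several of the repositories above revolve around exporting Keras/TensorFlow models to ONNX. As a rough illustration of that workflow, and not tied to any specific repository listed here, the sketch below uses tf2onnx and ONNX Runtime; the model, shapes, and "model.onnx" path are placeholders.

```python
# A rough sketch (not from any listed repo): export a tiny tf.keras model to
# ONNX with tf2onnx, then run it with ONNX Runtime.
import numpy as np
import tensorflow as tf
import tf2onnx
import onnxruntime as ort

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,), name="input"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Pin the input signature so the exported graph has a stable, named input.
spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="model.onnx")

# Load the exported graph with ONNX Runtime and check the output shape.
sess = ort.InferenceSession("model.onnx")
inp_name = sess.get_inputs()[0].name
out = sess.run(None, {inp_name: np.zeros((1, 4), np.float32)})
print(out[0].shape)  # expected: (1, 2)
```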