tensorflow / runtime
A performant and modular runtime for TensorFlow
☆756 · Updated last month
Related projects
Alternatives and complementary repositories for runtime
- The Tensor Algebra SuperOptimizer for Deep Learning ☆692 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,210 · Updated this week
- Common in-memory tensor structure (see the DLPack exchange sketch after this list) ☆912 · Updated last month
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆962 · Updated 2 months ago
- Guide for building custom ops for TensorFlow ☆378 · Updated last year
- Dive into Deep Learning Compiler ☆643 · Updated 2 years ago
- Collective communications library with various primitives for multi-machine training. ☆1,227 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆767 · Updated this week
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,737 · Updated 3 years ago
- TVM integration into PyTorch ☆452 · Updated 4 years ago
- Backward compatible ML compute opset inspired by HLO/MHLO ☆412 · Updated this week
- A tensor-aware point-to-point communication primitive for machine learning ☆249 · Updated last year
- nGraph has moved to OpenVINO ☆1,352 · Updated 4 years ago
- TensorFlow-nGraph bridge ☆137 · Updated 3 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,528 · Updated 5 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆816 · Updated this week
- A profiling and performance analysis tool for TensorFlow ☆360 · Updated this week
- Low-precision matrix multiplication ☆1,780 · Updated 9 months ago
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,355 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆1,885 · Updated this week
- High-performance, cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆532 · Updated 2 years ago
- Explore the capabilities of the TensorRT platform ☆260 · Updated 3 years ago
- PyTorch elastic training ☆730 · Updated 2 years ago
- TensorFlow backend for ONNX (see the conversion sketch after this list) ☆1,284 · Updated 7 months ago
- TensorFlow/TensorRT integration (see the TF-TRT sketch after this list) ☆736 · Updated 11 months ago
- Reference implementations of MLPerf™ inference benchmarks ☆1,238 · Updated this week
- Facebook AI Performance Evaluation Platform ☆388 · Updated 2 months ago
- heterogeneity-aware-lowering-and-optimization ☆253 · Updated 10 months ago
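
The "common in-memory tensor structure" entry above refers to DLPack, which lets frameworks hand tensors to one another without copying the underlying buffer. A minimal sketch of such an exchange, assuming TensorFlow 2.x (2.2 or later) and PyTorch are both installed; the tensor values are arbitrary:

```python
# Minimal DLPack exchange sketch (assumes TensorFlow >= 2.2 and PyTorch).
import tensorflow as tf
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# Create a tensor in TensorFlow and export it as a DLPack capsule.
tf_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
capsule = tf.experimental.dlpack.to_dlpack(tf_tensor)

# Import the same buffer into PyTorch without copying the data.
torch_tensor = from_dlpack(capsule)
print(torch_tensor)

# And back again: PyTorch -> DLPack capsule -> TensorFlow.
tf_roundtrip = tf.experimental.dlpack.from_dlpack(to_dlpack(torch_tensor))
print(tf_roundtrip)
```

Each DLPack capsule can be consumed only once, so a fresh capsule is produced for every handoff.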
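For the TensorFlow backend for ONNX, the typical flow is to load an ONNX graph, prepare a TensorFlow representation from it, and export that as a SavedModel. A minimal sketch, assuming the onnx and onnx-tf packages are installed; "model.onnx" and "model_tf" are hypothetical paths used only for illustration:

```python
# Minimal ONNX -> TensorFlow conversion sketch (assumes onnx and onnx-tf).
import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("model.onnx")   # read the serialized ONNX graph
tf_rep = prepare(onnx_model)           # build a TensorFlow representation of it
tf_rep.export_graph("model_tf")        # write the result out as a SavedModel
```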
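The TensorFlow/TensorRT integration takes an existing SavedModel and rewrites its TensorRT-compatible subgraphs into TensorRT engines. A minimal sketch, assuming a TensorFlow build with TensorRT support and an NVIDIA GPU; "saved_model_dir" and "saved_model_trt" are hypothetical paths:

```python
# Minimal TF-TRT conversion sketch (assumes TensorFlow built with TensorRT).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()                # rewrite supported subgraphs into TensorRT ops
converter.save("saved_model_trt")  # export the optimized SavedModel
```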