microsoft / DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
☆2,420 · Updated this week
Alternatives and similar repositories for DirectML:
Users interested in DirectML are comparing it to the libraries listed below.
- Fork of TensorFlow accelerated by DirectML (☆466, updated 6 months ago)
- DirectML PluggableDevice plugin for TensorFlow 2 (☆193, updated last month)
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs (☆1,862, updated this week)
- A Python package that extends official PyTorch to easily obtain extra performance on Intel platforms (☆1,828, updated this week)
- AMD's Machine Intelligence Library (☆1,143, updated this week)
- HIPIFY: Convert CUDA to Portable C++ Code (☆571, updated this week)
- Intel® Extension for TensorFlow* (☆336, updated last month)
- Intel® NPU Acceleration Library (☆666, updated 3 months ago)
- ONNXMLTools enables conversion of models to ONNX (☆1,071, updated 3 months ago)
- Dockerfiles for the various software layers defined in the ROCm software platform (☆460, updated 2 months ago)
- Examples for using ONNX Runtime for machine learning inference (☆1,351, updated this week)
- OpenCL SDK (☆642, updated last week)
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX (☆2,402, updated 2 months ago)
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it (☆543, updated last month)
- A machine learning compiler for GPUs, CPUs, and ML accelerators (☆3,100, updated this week)
- A retargetable MLIR-based machine learning compiler and runtime toolkit (☆3,103, updated this week)
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (☆337, updated this week)
- An Open Source Machine Learning Framework for Everyone (☆1,128, updated 6 months ago)
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (☆459, updated this week)
- Examples for using ONNX Runtime for model training (☆332, updated 5 months ago)
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem (☆1,498, updated this week)
- Tensors and Dynamic neural networks in Python with strong GPU acceleration (☆223, updated this week)
- TensorFlow ROCm port (☆690, updated this week)
- CUDA Core Compute Libraries (☆1,600, updated this week)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,725, updated this week)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,772, updated this week)
- ONNX Optimizer (☆693, updated 2 weeks ago)
- Samples for Intel® oneAPI Toolkits (☆1,020, updated 2 weeks ago)
- DirectStorage for Windows is an API that allows game developers to unlock the full potential of high-speed NVMe drives for loading game a… (☆758, updated last week)