pytorch / ort
Accelerate PyTorch models with ONNX Runtime
☆358 · Updated 3 weeks ago
Alternatives and similar repositories for ort:
Users interested in ort are comparing it to the libraries listed below.
- Examples for using ONNX Runtime for model training. ☆329 · Updated 4 months ago
- Common utilities for ONNX converters. ☆259 · Updated 3 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆323 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆351 · Updated this week
- Slicing a PyTorch Tensor Into Parallel Shards. ☆298 · Updated 3 years ago
- The Triton backend for the ONNX Runtime. ☆139 · Updated last week
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆179 · Updated 3 months ago
- Library for 8-bit optimizers and quantization routines. ☆718 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models. ☆505 · Updated 3 years ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime. ☆366 · Updated this week
- Implementation of a Transformer, but completely in Triton. ☆260 · Updated 2 years ago
- Scailable ONNX Python tools. ☆97 · Updated 4 months ago
- Torch Distributed Experimental. ☆115 · Updated 7 months ago
- ONNX Optimizer. ☆681 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆155 · Updated 3 months ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆229 · Updated 2 months ago
- Prune a model while fine-tuning or training. ☆400 · Updated 2 years ago
- The Triton backend for PyTorch TorchScript models. ☆144 · Updated last week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,035 · Updated 11 months ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆132 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆462 · Updated last week
- Transform an ONNX model into a PyTorch representation. ☆328 · Updated 4 months ago
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. ☆780 · Updated this week
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs. ☆198 · Updated 2 months ago
- A code generator from ONNX to PyTorch code. ☆135 · Updated 2 years ago
- Provides Python access to the NVML library for GPU diagnostics. ☆226 · Updated 3 months ago
- PyTorch RFCs (experimental). ☆130 · Updated 6 months ago
- Model compression for ONNX. ☆87 · Updated 4 months ago
- A library to analyze PyTorch traces. ☆348 · Updated last week