pytorch / ort
Accelerate PyTorch models with ONNX Runtime
☆358 · Updated 4 months ago
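As a rough sketch of how this library is typically used (assuming the `torch-ort` package is installed; the model, data, and hyperparameters below are illustrative placeholders, not taken from this page), the core idea is to wrap an existing `torch.nn.Module` in `ORTModule` so that training executes through ONNX Runtime while the rest of the training loop stays unchanged:

```python
# Hedged sketch: wrap an ordinary PyTorch model with torch-ort's ORTModule
# so forward/backward passes run through ONNX Runtime. Model shape, batch,
# and optimizer settings are dummy values for illustration only.
import torch
from torch_ort import ORTModule

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)
model = ORTModule(model)  # everything after this line is a normal PyTorch loop

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(32, 784)           # dummy input batch
y = torch.randint(0, 10, (32,))    # dummy labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```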
Alternatives and similar repositories for ort:
Users interested in ort are comparing it to the libraries listed below.
- Examples for using ONNX Runtime for model training. ☆322 · Updated 2 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆179 · Updated last month
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆304 · Updated this week
- Implementation of a Transformer, but completely in Triton ☆251 · Updated 2 years ago
- Common utilities for ONNX converters ☆256 · Updated last month
- Scailable ONNX python tools ☆96 · Updated 2 months ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆349 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆336 · Updated last week
- The Triton backend for the PyTorch TorchScript models. ☆139 · Updated this week
- Torch Distributed Experimental ☆115 · Updated 5 months ago
- A GPU performance profiling tool for PyTorch models ☆500 · Updated 3 years ago
- The Triton backend for the ONNX Runtime. ☆136 · Updated this week
- Library for 8-bit optimizers and quantization routines. ☆717 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API. ☆131 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆152 · Updated last month
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,016 · Updated 9 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆446 · Updated this week
- Prune a model while finetuning or training. ☆397 · Updated 2 years ago
- Actively maintained ONNX Optimizer ☆657 · Updated 10 months ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆193 · Updated this week
- A code generator from ONNX to PyTorch code ☆135 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated last year
- PyTorch RFCs (experimental) ☆131 · Updated 4 months ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆224 · Updated last month
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆755 · Updated last week
- High performance model preprocessing library on PyTorch ☆651 · Updated 9 months ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆253 · Updated 2 years ago
- A library to analyze PyTorch traces. ☆323 · Updated last month
- Transform ONNX model to PyTorch representation ☆323 · Updated 2 months ago
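For the last entry above (converting an ONNX model into a PyTorch representation), a minimal sketch using the `onnx2torch` package; the file name `model.onnx` and the input shape are assumptions for illustration, not taken from the list:

```python
# Hedged sketch: load an ONNX graph and convert it into a torch.nn.Module
# with onnx2torch. "model.onnx" is a hypothetical path.
import onnx
import torch
from onnx2torch import convert

onnx_model = onnx.load("model.onnx")
torch_model = convert(onnx_model)

# The converted module behaves like any other PyTorch model.
dummy_input = torch.randn(1, 3, 224, 224)  # shape depends on the original ONNX model
with torch.no_grad():
    output = torch_model(dummy_input)
print(output.shape)
```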