pytorch / serve
Serve, optimize and scale PyTorch models in production
☆4,350 · Updated 2 weeks ago
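For context, below is a minimal sketch of querying a running TorchServe instance over its REST inference API (POST to `/predictions/<model_name>` on the default inference port 8080). The model name `my_model` and input file `input.jpg` are placeholders for illustration, not part of the pytorch/serve repository itself, and the example assumes the model was already registered with the server.

```python
# Minimal sketch: call a running TorchServe inference endpoint.
# Assumes TorchServe is already started with a registered model;
# "my_model" and "input.jpg" are placeholder names.
import requests

TORCHSERVE_URL = "http://127.0.0.1:8080"  # default TorchServe inference port


def predict(model_name: str, payload_path: str):
    """POST a file to /predictions/<model_name> and return the decoded response."""
    with open(payload_path, "rb") as f:
        resp = requests.post(f"{TORCHSERVE_URL}/predictions/{model_name}", data=f)
    resp.raise_for_status()
    # Most built-in handlers return JSON; adjust if your handler returns raw text/bytes.
    return resp.json()


if __name__ == "__main__":
    print(predict("my_model", "input.jpg"))
```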
Alternatives and similar repositories for serve
Users interested in serve are comparing it to the libraries listed below:
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,644 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,361 · Updated 3 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,841 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,767 · Updated 2 weeks ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,023 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated 9 months ago
- Enabling PyTorch on XLA Devices (e.g. Google TPU) ☆2,653 · Updated this week
- An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models ☆4,605 · Updated this week
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆7,996 · Updated this week
- Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, … ☆3,177 · Updated 5 months ago
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,571 · Updated 3 weeks ago
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆4,457 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,274 · Updated last year
- Tutorials for creating and using ONNX models ☆3,586 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,055 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,463 · Updated last month
- Boosting your web services for deep learning applications. ☆1,242 · Updated 4 years ago
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,484 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,039 · Updated this week
- An easy-to-use PyTorch-to-TensorRT converter ☆4,792 · Updated last year
- Machine learning metrics for distributed, scalable PyTorch applications. ☆2,328 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,127 · Updated last week
- Hummingbird compiles trained ML models into tensor computation for faster inference. ☆3,459 · Updated last month
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU. ☆1,530 · Updated last month
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. ☆2,766 · Updated 2 months ago
- Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet f… ☆1,851 · Updated last week
- Aim 💫 — An easy-to-use & supercharged open-source experiment tracker. ☆5,754 · Updated this week
- Flax is a neural network library for JAX that is designed for flexibility. ☆6,747 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,136 · Updated 3 weeks ago
- cuML - RAPIDS Machine Learning Library ☆4,874 · Updated this week