pytorch / serve
Serve, optimize and scale PyTorch models in production
☆4,359 · Updated 4 months ago
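For orientation, below is a minimal sketch of calling a running TorchServe instance over its inference REST API; the model name "my_model", the default inference port 8080, and the input file "kitten.jpg" are placeholder assumptions for illustration, not details taken from this listing.

```python
# Minimal sketch: query TorchServe's inference REST API.
# Assumes a TorchServe instance is already running with a model registered
# under the placeholder name "my_model", listening on the default port 8080.
import requests


def predict(input_path: str, model_name: str = "my_model") -> dict:
    """POST a binary payload to /predictions/<model_name> and return the JSON result."""
    url = f"http://localhost:8080/predictions/{model_name}"
    with open(input_path, "rb") as f:
        response = requests.post(url, data=f.read())
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(predict("kitten.jpg"))  # "kitten.jpg" is a placeholder input file
```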
Alternatives and similar repositories for serve
Users interested in serve are comparing it to the libraries listed below
- The Triton Inference Server provides an optimized cloud and edge inferencing solution (see the sketch after this list). ☆10,153 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,392 · Updated 7 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,907 · Updated last week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated last year
- Enabling PyTorch on XLA Devices (e.g. Google TPU) ☆2,729 · Updated last week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,882 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,370 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,398 · Updated last week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,643 · Updated 3 weeks ago
- Machine learning metrics for distributed, scalable PyTorch applications. ☆2,383 · Updated this week
- An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models ☆4,703 · Updated this week
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,301 · Updated 2 years ago
- Aim 💫 — An easy-to-use & supercharged open-source experiment tracker. ☆5,926 · Updated this week
- Tutorials for creating and using ONNX models ☆3,636 · Updated last year
- High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently. ☆4,724 · Updated last week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆4,924 · Updated last week
- Hummingbird compiles trained ML models into tensor computation for faster inference. ☆3,511 · Updated 5 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,225 · Updated last week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,580 · Updated this week
- cuML - RAPIDS Machine Learning Library ☆5,063 · Updated this week
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆8,315 · Updated last week
- Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet f… ☆1,869 · Updated last week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,500 · Updated 3 months ago
- An easy to use PyTorch to TensorRT converter ☆4,839 · Updated last year
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. ☆2,928 · Updated 6 months ago
- Toolbox of models, callbacks, and datasets for AI/ML researchers. ☆1,755 · Updated 3 weeks ago
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,245 · Updated this week
- Boosting your web services for deep learning applications. ☆1,245 · Updated 4 years ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,695 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,489 · Updated 2 weeks ago
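For comparison with the Triton Inference Server entry above, here is a minimal sketch of a request against Triton's HTTP/REST endpoint (the KServe v2 inference protocol); the model name "my_model", the input tensor name "INPUT0", its shape, and the default HTTP port 8000 are placeholder assumptions that depend on the deployed model's configuration.

```python
# Minimal sketch: send an inference request to Triton Inference Server over HTTP.
# Assumes Triton is running on the default HTTP port 8000 and serves a model
# under the placeholder name "my_model" with a single FP32 input named "INPUT0".
import requests


def triton_infer(data: list[float]) -> dict:
    """POST a v2 inference request and return the parsed JSON response."""
    payload = {
        "inputs": [
            {
                "name": "INPUT0",          # placeholder input tensor name
                "shape": [1, len(data)],   # batch of one
                "datatype": "FP32",
                "data": data,
            }
        ]
    }
    url = "http://localhost:8000/v2/models/my_model/infer"
    response = requests.post(url, json=payload)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(triton_infer([1.0, 2.0, 3.0, 4.0]))
```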