pytorch / serve
Serve, optimize and scale PyTorch models in production
☆4,290 · Updated this week
Alternatives and similar repositories for serve:
Users interested in serve are comparing it to the libraries listed below.
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,742 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,259 · Updated last month
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,534 · Updated last week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,676 · Updated this week
- Enabling PyTorch on XLA Devices (e.g. Google TPU) ☆2,526 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,325 · Updated this week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,385 · Updated 2 weeks ago
- Model interpretability and understanding for PyTorch ☆5,079 · Updated 3 weeks ago
- High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently. ☆4,584 · Updated last month
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆7,374 · Updated this week
- Open standard for machine learning interoperability ☆18,432 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆8,735 · Updated last week
- Flax is a neural network library for JAX that is designed for flexibility.