project-codeflare / zero-copy-model-loading
In-depth code associated with my Medium blog post, "How to Load PyTorch Models 340 Times Faster with Ray"
☆28 · Updated 3 years ago
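The blog post behind this repo builds on a simple observation: `torch.from_numpy` wraps a NumPy buffer without copying it, so weights parked in Ray's shared-memory object store (via `ray.put`) can be turned back into tensors essentially for free. A minimal sketch of that core trick, with the Ray plumbing left out (the `tensors_from_numpy_state` helper here is illustrative, not part of the repo's API):

```python
import numpy as np
import torch

def tensors_from_numpy_state(numpy_state):
    """Rebuild a PyTorch state dict from NumPy arrays without copying.

    In the full Ray workflow, numpy_state would come out of the
    shared-memory object store via ray.get(), making the whole
    load effectively zero-copy.
    """
    return {name: torch.from_numpy(arr) for name, arr in numpy_state.items()}

# Demonstration: mutate the NumPy array and watch the tensor change,
# showing the two views share one buffer rather than holding copies.
arr = np.zeros((2, 2), dtype=np.float32)
state = tensors_from_numpy_state({"weight": arr})
arr[0, 0] = 7.0
print(state["weight"][0, 0].item())  # 7.0 — same memory, no copy
```

Note that with Ray, arrays fetched from the object store are read-only, so tensors built this way must not be mutated in place.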
Alternatives and similar repositories for zero-copy-model-loading
Users interested in zero-copy-model-loading are comparing it to the libraries listed below.
- Provide Python access to the NVML library for GPU diagnostics · ☆258 · Updated 4 months ago
- FIL backend for the Triton Inference Server · ☆87 · Updated this week
- TorchFix - a linter for PyTorch-using code with autofix support · ☆152 · Updated 5 months ago
- Module, Model, and Tensor Serialization/Deserialization · ☆286 · Updated 5 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. · ☆328 · Updated last week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… · ☆164 · Updated 3 weeks ago
- Home for OctoML PyTorch Profiler · ☆113 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… · ☆182 · Updated last month
- The Triton backend for PyTorch TorchScript models. · ☆172 · Updated 2 weeks ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… · ☆411 · Updated last week
- Python bindings for UCX · ☆139 · Updated 4 months ago
- Unified storage framework for the entire machine learning lifecycle · ☆155 · Updated last year
- Distributed XGBoost on Ray · ☆152 · Updated last year
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. · ☆106 · Updated last year
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-… · ☆67 · Updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. · ☆216 · Updated this week
- Productionize machine learning predictions, with ONNX or without · ☆66 · Updated 2 years ago
- PyTorch Lightning Distributed Accelerators using Ray · ☆215 · Updated 2 years ago
- Torch Distributed Experimental · ☆117 · Updated last year
- Plugin for deploying MLflow models to TorchServe · ☆110 · Updated 2 years ago
- Simple dependency injection framework for Python · ☆21 · Updated last year
- Distributed ML Optimizer · ☆35 · Updated 4 years ago
- A library that translates Python and NumPy to optimized distributed systems code. · ☆131 · Updated 3 years ago
- A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend. · ☆130 · Updated last week
- The Triton backend for the ONNX Runtime. · ☆172 · Updated 2 weeks ago
- Nod.ai 🦈 version of 👻 . You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … · ☆107 · Updated last month
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. · ☆17 · Updated 3 years ago
- Ray - A curated list of resources: https://github.com/ray-project/ray · ☆78 · Updated 3 months ago
- MLFlow Deployment Plugin for Ray Serve · ☆46 · Updated 3 years ago
- Benchmarking some transformer deployments · ☆26 · Updated last month