project-codeflare / zero-copy-model-loading
In-depth code associated with my Medium blog post, "How to Load PyTorch Models 340 Times Faster with Ray"
☆28Updated 3 years ago
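The core idea behind the repo, as described in the blog post, is to split a PyTorch model into a weight-free skeleton plus its weights as NumPy arrays, put that pair into Ray's shared-memory object store once, and let every task or actor on the node rebuild the model from zero-copy views. A minimal sketch of that idea follows, assuming Ray and PyTorch are installed; the helper names (`extract_tensors`, `replace_tensors`) are illustrative and may not match the repo's exact API.

```python
# Minimal sketch of zero-copy model loading with Ray (assumes `ray` and `torch`
# are installed; helper names are illustrative, not necessarily the repo's API).
import copy

import ray
import torch


def extract_tensors(model: torch.nn.Module):
    """Split a model into a weight-free skeleton plus its weights as NumPy arrays."""
    tensors = []
    for _, module in model.named_modules():
        params = {
            name: torch.clone(p).detach().numpy()
            for name, p in module.named_parameters(recurse=False)
        }
        buffers = {
            name: torch.clone(b).detach().numpy()
            for name, b in module.named_buffers(recurse=False)
        }
        tensors.append({"params": params, "buffers": buffers})

    skeleton = copy.deepcopy(model)
    for _, module in skeleton.named_modules():
        names = [n for n, _ in module.named_parameters(recurse=False)]
        names += [n for n, _ in module.named_buffers(recurse=False)]
        for name in names:
            setattr(module, name, None)  # drop weights so the skeleton pickles cheaply
    return skeleton, tensors


def replace_tensors(model: torch.nn.Module, tensors):
    """Re-attach shared-memory NumPy arrays to the skeleton without copying."""
    modules = [module for _, module in model.named_modules()]
    for module, entry in zip(modules, tensors):
        for name, array in entry["params"].items():
            # torch.as_tensor wraps the (read-only) shared-memory array in place.
            module.register_parameter(
                name, torch.nn.Parameter(torch.as_tensor(array), requires_grad=False)
            )
        for name, array in entry["buffers"].items():
            module.register_buffer(name, torch.as_tensor(array))


# Usage sketch: pay the serialization cost once, then each task/actor on the
# node rebuilds the model from Plasma shared memory without copying the weights.
ray.init()
model = torch.nn.Linear(16, 4)           # stand-in for a large model, e.g. BERT
model_ref = ray.put(extract_tensors(model))

skeleton, weights = ray.get(model_ref)   # NumPy arrays map shared memory, no copy
replace_tensors(skeleton, weights)
skeleton.eval()
```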
Alternatives and similar repositories for zero-copy-model-loading
Users interested in zero-copy-model-loading are comparing it to the libraries listed below
- Simple dependency injection framework for Python☆21Updated last year
- Provide Python access to the NVML library for GPU diagnostics☆245Updated last week
- Module, Model, and Tensor Serialization/Deserialization☆264Updated 3 weeks ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind…☆161Updated 2 months ago
- TorchFix - a linter for PyTorch-using code with autofix support☆147Updated 3 weeks ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup…☆387Updated last week
- Python 3 Bindings for the NVIDIA Management Library☆140Updated last year
- FIL backend for the Triton Inference Server☆82Updated this week
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-…☆67Updated 2 years ago
- 🐍 Python bindings for the Hora Approximate Nearest Neighbor Search Algorithm library☆72Updated 3 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors`☆45Updated last year
- Distributed XGBoost on Ray☆149Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i…☆180Updated 2 weeks ago
- Accelerate PyTorch models with ONNX Runtime☆363Updated 6 months ago
- A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend.☆123Updated last month
- High-performance safetensors model loader☆55Updated last month
- Home for OctoML PyTorch Profiler☆114Updated 2 years ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning.☆296Updated this week
- A collection of reproducible inference engine benchmarks☆32Updated 4 months ago
- benchmarking some transformer deployments☆26Updated 2 years ago
- Unified storage framework for the entire machine learning lifecycle☆155Updated last year
- A library that translates Python and NumPy to optimized distributed systems code.☆132Updated 2 years ago
- Awesome utilities for performance profiling☆188Updated 6 months ago
- Python client for RedisAI☆89Updated 2 years ago
- Plugin for deploying MLflow models to TorchServe☆110Updated 2 years ago
- ☆39Updated this week
- Pytorch Lightning Distributed Accelerators using Ray☆213Updated last year
- MLOps Python Library☆119Updated 3 years ago
- experiments with inference on llama☆104Updated last year
- Nod.ai 🦈 version of 👻 . You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository …☆106Updated 8 months ago