project-codeflare / zero-copy-model-loading
In-depth code associated with my Medium blog post, "How to Load PyTorch Models 340 Times Faster with Ray"
☆28 · Updated 3 years ago
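The idea behind the repo, roughly: serialize a model's weights to NumPy once, put them in Ray's shared-memory object store, and have each worker wrap those arrays back into tensors without copying. The sketch below is a minimal illustration of that idea, not the repo's code: the helper names are hypothetical, it assumes PyTorch 2.1+ (for `load_state_dict(..., assign=True)`), and the reconstructed tensors are read-only, so it is inference-only. See the blog post for the actual implementation.

```python
import ray
import torch
import torch.nn as nn

ray.init()

def weights_to_numpy(model: nn.Module) -> dict:
    # Detach to CPU NumPy so Ray can store the arrays in its shared-memory object store.
    return {name: t.detach().cpu().numpy() for name, t in model.state_dict().items()}

def load_zero_copy(skeleton: nn.Module, weights_ref) -> nn.Module:
    # ray.get() returns NumPy arrays backed directly by the object store (no copy),
    # and torch.as_tensor() wraps them without copying either; the tensors are
    # read-only, so this is suitable for inference only.
    arrays = ray.get(weights_ref)
    state = {name: torch.as_tensor(a) for name, a in arrays.items()}
    # assign=True (PyTorch >= 2.1) attaches the tensors instead of copy_()-ing into
    # the module's existing parameters; the repo itself predates this flag.
    skeleton.load_state_dict(state, assign=True)
    return skeleton.eval()

model = nn.Linear(1024, 1024)
ref = ray.put(weights_to_numpy(model))              # pay the serialization cost once
restored = load_zero_copy(nn.Linear(1024, 1024), ref)
```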
Alternatives and similar repositories for zero-copy-model-loading
Users interested in zero-copy-model-loading are comparing it to the libraries listed below.
- Simple dependency injection framework for Python · ☆21 · Updated last year
- Python bindings for the Hora Approximate Nearest Neighbor Search Algorithm library · ☆73 · Updated 4 years ago
- TorchFix - a linter for PyTorch-using code with autofix support · ☆151 · Updated 3 months ago
- Provide Python access to the NVML library for GPU diagnostics · ☆251 · Updated 3 months ago
- A collection of reproducible inference engine benchmarks · ☆38 · Updated 7 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. · ☆319 · Updated this week
- Distributed XGBoost on Ray · ☆152 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization · ☆277 · Updated 3 months ago
- FIL backend for the Triton Inference Server · ☆83 · Updated last week
- High-performance safetensors model loader · ☆79 · Updated 3 weeks ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. · ☆69 · Updated 7 months ago
- The Triton backend for the PyTorch TorchScript models. · ☆166 · Updated this week
- Awesome utilities for performance profiling · ☆197 · Updated 9 months ago
- Nod.ai version of IREE. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … · ☆107 · Updated 2 weeks ago
- A memory efficient DLRM training solution using ColossalAI · ☆106 · Updated 3 years ago
- Unified storage framework for the entire machine learning lifecycle · ☆155 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… · ☆161 · Updated 2 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… · ☆182 · Updated 3 months ago
- Plugin for deploying MLflow models to TorchServe · ☆110 · Updated 2 years ago
- Python bindings for ggml · ☆146 · Updated last year
- A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend. · ☆125 · Updated 2 months ago
- Interactive performance profiling and debugging tool for PyTorch neural networks. · ☆64 · Updated 10 months ago
- Productionize machine learning predictions, with ONNX or without · ☆66 · Updated last year
- Experiments with inference on llama · ☆103 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆46 · Updated last year
- Some microbenchmarks and design docs before commencement · ☆12 · Updated 4 years ago
- ML/DL Math and Method notes · ☆64 · Updated 2 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` · ☆47 · Updated last year
- Python 3 Bindings for the NVIDIA Management Library · ☆141 · Updated last year
- ClearML - Model-Serving Orchestration and Repository Solution · ☆159 · Updated last month