project-codeflare / zero-copy-model-loading
In-depth code associated with my Medium blog post, "How to Load PyTorch Models 340 Times Faster with Ray"
☆26 · Updated 2 years ago
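The core trick behind the linked post is that deserializing weights can be made O(1) by wrapping a shared buffer in a NumPy view instead of copying bytes into new tensors; Ray's object store applies the same idea to NumPy arrays returned from `ray.get()`. A minimal sketch of that idea (hypothetical helper name, not the repo's actual code):

```python
import numpy as np

def weights_from_buffer(buf: memoryview, dtype=np.float32) -> np.ndarray:
    """Reinterpret a (shared-memory) buffer as a weight array without copying."""
    return np.frombuffer(buf, dtype=dtype)

# Simulate a blob that would live in shared memory / Ray's Plasma object store.
blob = bytearray(np.arange(4, dtype=np.float32).tobytes())
w = weights_from_buffer(memoryview(blob))
print(w.tolist())  # [0.0, 1.0, 2.0, 3.0]

# Zero-copy: the array is a view, so changes to the buffer are visible in it.
blob[0:4] = np.float32(9.0).tobytes()
print(w[0])  # 9.0
```

In the real setting the buffer would come from Ray's object store and the view would be handed to `torch.from_numpy`, which also shares memory rather than copying.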
Alternatives and similar repositories for zero-copy-model-loading:
- Simple dependency injection framework for Python ☆20 · Updated 8 months ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆43 · Updated 8 months ago
- TorchFix - a linter for PyTorch-using code with autofix support ☆122 · Updated 3 weeks ago
- ☆12 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆153 · Updated last month
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆240 · Updated this week
- Home for OctoML PyTorch Profiler ☆107 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization ☆210 · Updated 2 months ago
- A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend. ☆106 · Updated this week
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆43 · Updated 6 months ago
- PyTorch-centric eager mode debugger ☆44 · Updated last month
- Productionize machine learning predictions, with ONNX or without ☆65 · Updated last year
- Ray - A curated list of resources: https://github.com/ray-project/ray ☆48 · Updated this week
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆41 · Updated 2 months ago
- Torch Distributed Experimental ☆115 · Updated 5 months ago
- Python binding for the Hora Approximate Nearest Neighbor Search Algorithm library ☆68 · Updated 3 years ago
- Experiments with inference on LLaMA ☆104 · Updated 7 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆66 · Updated this week
- PyTorch half-precision GEMM lib with fused optional bias + optional ReLU/GELU ☆48 · Updated last month
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ☆18 · Updated 2 years ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated last year
- Lightning HPO & Training Studio App ☆18 · Updated last year
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-… ☆66 · Updated last year
- Make Triton easier ☆44 · Updated 7 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆178 · Updated last month
- ☆21 · Updated 3 months ago
- The Triton backend for PyTorch TorchScript models. ☆141 · Updated last week
- Context manager to profile the forward and backward times of PyTorch's nn.Module ☆84 · Updated last year
- Hacks for PyTorch ☆18 · Updated last year
- Some microbenchmarks and design docs before commencement ☆12 · Updated 3 years ago