coreweave / tensorizer
Module, Model, and Tensor Serialization/Deserialization
☆283 · Updated 4 months ago
Alternatives and similar repositories for tensorizer
Users interested in tensorizer are comparing it to the libraries listed below:
- ☆275 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆467 · Updated 2 weeks ago
- CUDA checkpoint and restore utility ☆401 · Updated 3 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆411 · Updated this week
- ☆320 · Updated last year
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆398 · Updated last week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆247 · Updated this week
- High-performance safetensors model loader ☆92 · Updated 3 weeks ago
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆233 · Updated this week
- A library to analyze PyTorch traces. ☆454 · Updated 3 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last month
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆204 · Updated this week
- ☆322 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆468 · Updated this week
- The Triton backend for PyTorch TorchScript models. ☆170 · Updated this week
- Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆73 · Updated last month
- Load compute kernels from the Hub ☆359 · Updated this week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆218 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆278 · Updated last month
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆214 · Updated 8 months ago
- ☆206 · Updated 8 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆190 · Updated this week
- Inference server benchmarking tool ☆136 · Updated 3 months ago
- The Triton backend for the ONNX Runtime. ☆170 · Updated last week
- A tool to configure, launch, and manage your machine learning experiments. ☆213 · Updated this week
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆355 · Updated this week
- Where GPUs get cooked 👩🍳🔥 ☆347 · Updated 3 months ago