coreweave / tensorizer
Module, Model, and Tensor Serialization/Deserialization
☆240 · Updated last week
Alternatives and similar repositories for tensorizer
Users interested in tensorizer are comparing it to the libraries listed below.
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆177 · Updated 2 weeks ago
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆349 · Updated last week
- CUDA checkpoint and restore utility ☆345 · Updated 4 months ago
- PyTorch per-step fault tolerance (actively under development) ☆329 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆204 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆367 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆413 · Updated this week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆209 · Updated 10 months ago
- Getting Started with the CoreWeave Kubernetes GPU Cloud ☆72 · Updated last week
- Google TPU optimizations for Transformers models ☆113 · Updated 5 months ago
- NVIDIA NCCL Tests for Distributed Training ☆97 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆126 · Updated this week
- The Triton backend for the ONNX Runtime ☆152 · Updated this week
- A library to analyze PyTorch traces ☆391 · Updated this week
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆61 · Updated 2 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆188 · Updated this week
- High-performance safetensors model loader ☆39 · Updated 2 weeks ago
- The Triton backend for PyTorch TorchScript models ☆152 · Updated this week
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps cloud developers orchestrate training jobs on accelerat… ☆123 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆277 · Updated last year