coreweave / tensorizer
Module, Model, and Tensor Serialization/Deserialization
☆272 · Updated 2 months ago
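The core idea behind tensorizer is storing named tensors as records that can be streamed straight off disk or object storage instead of being fully materialized first. As a rough illustration only (a stdlib-only sketch of length-prefixed tensor records, not tensorizer's actual on-disk format or API):

```python
import io
import struct

def serialize_tensors(tensors: dict[str, list[float]]) -> bytes:
    """Write named float32 'tensors' as length-prefixed records.

    Illustrative format only: [name_len][name][count][float32 values].
    """
    buf = io.BytesIO()
    for name, values in tensors.items():
        encoded = name.encode("utf-8")
        buf.write(struct.pack("<I", len(encoded)))   # name length
        buf.write(encoded)                           # name bytes
        buf.write(struct.pack("<I", len(values)))    # element count
        buf.write(struct.pack(f"<{len(values)}f", *values))
    return buf.getvalue()

def deserialize_tensors(data: bytes) -> dict[str, list[float]]:
    """Read the records back one at a time, as a stream would."""
    buf = io.BytesIO(data)
    out: dict[str, list[float]] = {}
    while True:
        header = buf.read(4)
        if not header:                               # end of stream
            break
        (name_len,) = struct.unpack("<I", header)
        name = buf.read(name_len).decode("utf-8")
        (count,) = struct.unpack("<I", buf.read(4))
        out[name] = list(struct.unpack(f"<{count}f", buf.read(4 * count)))
    return out

weights = {"layer.weight": [1.0, 2.5], "layer.bias": [-3.0]}
restored = deserialize_tensors(serialize_tensors(weights))
```

Because each record is self-describing, a reader can deserialize tensors one at a time as bytes arrive, which is what makes this layout suitable for streaming large models over a network.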
Alternatives and similar repositories for tensorizer
Users interested in tensorizer are comparing it to the libraries listed below.
- ☆264 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆446 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆388 · Updated 5 months ago
- CUDA checkpoint and restore utility ☆381 · Updated last month
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆399 · Updated last week
- ☆317 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated last year
- A library to analyze PyTorch traces. ☆426 · Updated last week
- High-performance safetensors model loader ☆71 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated last month
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆232 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆450 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆317 · Updated last month
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆151 · Updated this week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆199 · Updated last week
- ☆218 · Updated 9 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated last week
- ☆42 · Updated 2 weeks ago
- Inference server benchmarking tool ☆128 · Updated last month
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆307 · Updated last week
- ☆309 · Updated last week
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- ☆205 · Updated 6 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated 2 weeks ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆77 · Updated last month
- ☆145 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆204 · Updated this week
- Pipeline Parallelism for PyTorch ☆781 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆216 · Updated last year
- ☆337 · Updated last week