run-ai / runai-model-streamer
☆267 · Updated last week
Alternatives and similar repositories for runai-model-streamer
Users interested in runai-model-streamer are comparing it to the libraries listed below.
- Module, Model, and Tensor Serialization/Deserialization ☆276 · Updated 3 months ago
- Inference server benchmarking tool ☆130 · Updated 2 months ago
- ☆316 · Updated last year
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆322 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆454 · Updated 3 weeks ago
- CUDA checkpoint and restore utility ☆393 · Updated 2 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆730 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆456 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- High-performance safetensors model loader ☆76 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆132 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆130 · Updated 2 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last week
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future; PRs welcome) ☆392 · Updated 5 months ago
- ☆317 · Updated last week
- A Lossless Compression Library for AI pipelines ☆288 · Updated 5 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving ☆79 · Updated last year
- A tool to configure, launch and manage your machine learning experiments ☆208 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 2 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆691 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated 2 weeks ago
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- Where GPUs get cooked 👩‍🍳🔥 ☆319 · Updated 2 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- Common recipes to run vLLM ☆245 · Updated last week
- Load compute kernels from the Hub ☆337 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆126 · Updated 3 weeks ago
- 👷 Build compute kernels ☆190 · Updated this week
- ☆42 · Updated last week
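
To ground the comparison, here is a minimal sketch of what the headline project does: streaming tensors out of a `.safetensors` checkpoint so that weight loading overlaps with storage I/O instead of blocking on a full read. The `SafetensorsStreamer` class and its `stream_file`/`get_tensors` methods follow the usage example published in the runai-model-streamer README; treat the exact names and package layout as assumptions and verify them against the current docs.

```python
# Sketch of streaming weights with runai-model-streamer
# (pip install runai-model-streamer). API names below follow the
# project's README example and may change; verify upstream.
from runai_model_streamer import SafetensorsStreamer

FILE_PATH = "/path/to/model.safetensors"  # placeholder checkpoint path

with SafetensorsStreamer() as streamer:
    # Start concurrent reads from local disk or object storage.
    streamer.stream_file(FILE_PATH)
    # Tensors are yielded as their bytes arrive, so downstream work
    # (device placement, module init) can overlap with the remaining I/O.
    for name, tensor in streamer.get_tensors():
        print(name, tuple(tensor.shape))
```

For serving, vLLM can use this loader via its `--load-format runai_streamer` option, streaming weights at engine startup rather than reading the whole checkpoint up front, which is the main point of comparison with the plain safetensors loaders listed above.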