qoofyk / LLM_Sizing_Guide
A calculator to estimate the memory footprint, capacity, and latency of LLM deployments on VMware Private AI with NVIDIA.
☆37 · Updated 4 months ago
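The sizing math behind a calculator like this mostly reduces to two terms: weight memory (parameter count times bytes per parameter) and KV-cache memory (which grows with context length and batch size). Below is a minimal sketch of those two estimates; the function names and the example model shape are hypothetical and not taken from the repository.

```python
# Illustrative LLM memory-footprint sketch (not the repo's actual formulas).
# Assumptions: weights = n_params * bytes_per_param; KV cache uses the common
# 2 (K and V) * layers * kv_heads * head_dim * seq_len * batch * bytes estimate.

def weights_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
    """Weight memory in GB: 2 bytes/param for FP16/BF16, 1 for INT8, 0.5 for INT4."""
    return n_params * bytes_per_param / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per_elem: float = 2.0) -> float:
    """KV-cache memory in GB for a batch of sequences at the given context length."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem / 1e9

if __name__ == "__main__":
    # Example: a 70B-parameter model with GQA (hypothetical shape for illustration).
    w = weights_gb(70e9, bytes_per_param=2.0)  # ~140 GB in FP16/BF16
    kv = kv_cache_gb(layers=80, kv_heads=8, head_dim=128,
                     seq_len=8192, batch=4)    # ~10.7 GB
    print(f"weights ~{w:.0f} GB, KV cache ~{kv:.1f} GB, total ~{w + kv:.0f} GB")
```

Real deployments also budget for activations, memory fragmentation, and framework overhead, so sizing calculators typically add a safety margin on top of these two terms.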
Alternatives and similar repositories for LLM_Sizing_Guide
Users interested in LLM_Sizing_Guide are comparing it to the libraries listed below.
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- ☆56 · Updated last year
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last month
- vLLM Router ☆54 · Updated last year
- The driver for LMCache core to run in vLLM ☆59 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 3 weeks ago
- Self-host LLMs with vLLM and BentoML ☆162 · Updated last month
- ☆60 · Updated last year
- Inference server benchmarking tool ☆132 · Updated 2 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆174 · Updated last week
- ☆128 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆354 · Updated this week
- ☆31 · Updated 8 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆765 · Updated this week
- ☆273 · Updated last week
- LLM Serving Performance Evaluation Harness ☆82 · Updated 10 months ago
- Benchmarking the serving capabilities of vLLM ☆58 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- vLLM performance dashboard ☆39 · Updated last year
- ☆322 · Updated this week
- Comparison of Language Model Inference Engines ☆238 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- ☆67 · Updated 9 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆16 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- Benchmarking suite for popular AI APIs ☆88 · Updated 10 months ago