HabanaAI / vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
☆85 Updated last week
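For orientation, here is a minimal offline-inference sketch using vLLM's standard Python API, which this fork is assumed to preserve on Gaudi; the model name is an illustrative placeholder:

```python
# Minimal offline batch inference with vLLM's Python API; assumes the
# Habana fork keeps upstream vLLM's entry points. Model name is a placeholder.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "In one sentence, explain paged attention:",
]
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# vLLM batches and schedules the prompts internally; on Gaudi the fork
# targets HPU devices instead of CUDA GPUs.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

for out in llm.generate(prompts, sampling):
    print(out.prompt, "->", out.outputs[0].text)
```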
Alternatives and similar repositories for vllm-fork
Users interested in vllm-fork are comparing it to the libraries listed below
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 Updated 8 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆239 Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 Updated 5 months ago
- A low-latency & high-throughput serving engine for LLMs ☆450 Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆446 Updated 6 months ago
- Reference models for Intel® Gaudi® AI Accelerator ☆169 Updated 2 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆126 Updated 3 weeks ago
- oneCCL Bindings for PyTorch* (deprecated) ☆103 Updated last month
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆35 Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆320 Updated 2 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆740 Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆466 Updated 7 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆234 Updated last week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆224 Updated 2 years ago
- Microsoft Collective Communication Library ☆66 Updated last year
- Easy and Efficient Quantization for Transformers ☆203 Updated 5 months ago
- ROCm Communication Collectives Library (RCCL) ☆403 Updated last week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU); see the sketch after this list ☆201 Updated this week
- Applied AI experiments and examples for PyTorch ☆308 Updated 3 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆272 Updated 4 months ago
- Perplexity GPU Kernels ☆534 Updated 3 weeks ago
- A tool for bandwidth measurements on NVIDIA GPUs. ☆575 Updated 7 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 Updated last year
- Zero Bubble Pipeline Parallelism ☆437 Updated 6 months ago
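As noted in the Habana Gaudi training entry above, here is a minimal fine-tuning sketch using optimum-habana's drop-in Trainer replacements. It assumes a Gaudi host with the Habana software stack installed; the model, dataset, and gaudi_config names are illustrative:

```python
# Sketch of HPU fine-tuning via optimum-habana's GaudiTrainer, a drop-in
# replacement for transformers.Trainer (assumes Habana drivers/SynapseAI).
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

name = "bert-base-uncased"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Tiny in-memory dataset, just enough to exercise the training loop.
raw = Dataset.from_dict({"text": ["great", "awful"], "label": [1, 0]})
train_ds = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True,
                         padding="max_length", max_length=32)
)

args = GaudiTrainingArguments(
    output_dir="out",
    use_habana=True,       # run on HPU rather than CPU/GPU
    use_lazy_mode=True,    # Gaudi's lazy graph-execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # published Gaudi config
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

GaudiTrainer(model=model, args=args, train_dataset=train_ds).train()
```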