HabanaAI / vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
⭐75 · Updated this week
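For context on what the engine does, here is a minimal offline-inference sketch. It assumes vllm-fork preserves upstream vLLM's Python API (`LLM` / `SamplingParams`); the model name below is illustrative, not something this listing specifies.

```python
# Minimal sketch, assuming the fork keeps upstream vLLM's offline API.
from vllm import LLM, SamplingParams

# Load a model; on Gaudi hardware the fork is expected to route work
# to HPU kernels rather than CUDA ones (assumption, not verified here).
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation; continuous batching is the source of vLLM's
# high throughput.
outputs = llm.generate(["What is paged attention?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

For serving deployments, upstream vLLM also ships an OpenAI-compatible HTTP server entry point; the offline API above is simply the shortest way to exercise the engine.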
Alternatives and similar repositories for vllm-fork
Users interested in vllm-fork are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference on Habana Gaudi · ⭐33 · Updated 2 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) · ⭐186 · Updated this week
- A low-latency & high-throughput serving engine for LLMs · ⭐370 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention · ⭐384 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity · ⭐211 · Updated last year
- Perplexity GPU Kernels · ⭐324 · Updated 2 weeks ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference · ⭐118 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ⭐61 · Updated 2 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator · ⭐161 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ⭐196 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ⭐13 · Updated 2 weeks ago
- Applied AI experiments and examples for PyTorch · ⭐271 · Updated this week
- ⭐118 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) · ⭐365 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … · ⭐169 · Updated last week
- ⭐260 · Updated 2 weeks ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ⭐355 · Updated 9 months ago
- LLM Serving Performance Evaluation Harness · ⭐78 · Updated 3 months ago
- ⭐99 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ⭐251 · Updated 7 months ago
- ⭐53 · Updated 8 months ago
- ⭐79 · Updated 6 months ago
- ⭐86 · Updated 5 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ⭐208 · Updated 9 months ago
- PyTorch distributed training acceleration framework · ⭐49 · Updated 3 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ⭐365 · Updated this week
- ⭐73 · Updated this week
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray · ⭐127 · Updated last month
- ⭐49 · Updated 2 months ago
- ⭐193 · Updated 3 weeks ago