HabanaAI / vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
☆50 · Updated this week
Alternatives and similar repositories for vllm-fork:
Users interested in vllm-fork are comparing it to the libraries listed below.
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆114 · Updated 10 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆166 · Updated this week
- Materials for learning SGLang ☆191 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆195 · Updated last year
- Large Language Model Text Generation Inference on Habana Gaudi ☆31 · Updated this week
- LLM Serving Performance Evaluation Harness ☆66 · Updated 5 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆274 · Updated this week
- Easy and Efficient Quantization for Transformers ☆192 · Updated last month
- Applied AI experiments and examples for PyTorch ☆216 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆232 · Updated 3 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆58 · Updated last month
- A low-latency & high-throughput serving engine for LLMs ☆301 · Updated 4 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆292 · Updated this week
- ☆64 · Updated 2 months ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆204 · Updated 5 months ago
- ☆114 · Updated 10 months ago
- ☆180 · Updated 6 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆159 · Updated last week
- ☆58 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆186 · Updated last week
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆102 · Updated 2 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆110 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆57 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆64 · Updated 6 years ago
- ☆218 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆378 · Updated 2 months ago
- OpenAI Triton backend for Intel® GPUs ☆157 · Updated this week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (see the quantization sketch after this list) ☆327 · Updated 5 months ago
- ☆52 · Updated 4 months ago
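
The online-normalizer entry above benchmarks the single-pass softmax from Milakov & Gimelshein (2018). As a minimal sketch of the idea (plain Python for clarity, not the repository's actual benchmark code): the running maximum and the running sum of exponentials are maintained together in one pass over the input, with the sum rescaled whenever a larger maximum appears.

```python
import math

def online_softmax(scores):
    """Single-pass softmax normalizer (Milakov & Gimelshein, 2018).

    Tracks the running maximum m and the running sum d of exp(x - m),
    rescaling d whenever a new maximum is found, so the scores are
    read only once and exp() never overflows.
    """
    m = float("-inf")  # running maximum
    d = 0.0            # running normalizer sum
    for x in scores:
        m_new = max(m, x)
        # Rescale the accumulated sum to the new maximum, add the new term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in scores]

print(online_softmax([1.0, 2.0, 3.0]))  # same result as two-pass softmax
```

The same rescaling trick is what lets FlashAttention-style kernels fuse softmax into the attention loop without materializing a full row of scores.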
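
KVQuant, also listed above, pushes KV-cache quantization to very low bit widths. The sketch below is only a generic per-channel int8 round-trip to show what quantizing a KV cache means; the tensor layout and function names are assumptions for illustration, and KVQuant's actual method (non-uniform, sub-4-bit, with outlier handling) is considerably more involved.

```python
import torch

def quantize_kv_per_channel(kv: torch.Tensor):
    """Symmetric int8 quantization of a KV-cache tensor.

    kv: [num_tokens, num_heads, head_dim] (assumed layout).
    One scale per (head, dim) channel, computed over the token axis,
    since key/value distributions vary strongly per channel.
    """
    scale = kv.abs().amax(dim=0).clamp(min=1e-8) / 127.0  # [num_heads, head_dim]
    q = torch.round(kv / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

# Round-trip check on random data.
kv = torch.randn(128, 8, 64)
q, scale = quantize_kv_per_channel(kv)
print(f"max abs error: {(dequantize_kv(q, scale) - kv).abs().max():.4f}")
```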