HabanaAI / vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
☆58 · Updated this week
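For context on the headline repo itself: vllm-fork tracks upstream vLLM, so the usual offline-inference entry points should apply. Below is a minimal sketch, under the assumption that the fork preserves upstream vLLM's Python API on Gaudi; the model name is illustrative.

```python
# Minimal offline inference with vLLM's Python API. A sketch assuming
# vllm-fork keeps upstream vLLM's interface when running on Gaudi (HPU).
from vllm import LLM, SamplingParams

# Illustrative model; substitute any HF-format causal LM the engine supports.
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["High-throughput LLM serving works by"], params)

for out in outputs:
    print(out.outputs[0].text)
```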
Alternatives and similar repositories for vllm-fork:
Users interested in vllm-fork are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference on Habana Gaudi · ☆33 · Updated this week
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference · ☆116 · Updated last year
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) · ☆175 · Updated this week
- A low-latency & high-throughput serving engine for LLMs · ☆319 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity · ☆201 · Updated last year
- Easy and Efficient Quantization for Transformers · ☆192 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆310 · Updated this week
- Applied AI experiments and examples for PyTorch · ☆244 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator · ☆160 · Updated 3 weeks ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆298 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ☆189 · Updated this week
- ☆116 · Updated last year
- ☆237 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray · ☆121 · Updated 3 weeks ago
- A tool for bandwidth measurements on NVIDIA GPUs. · ☆386 · Updated last month
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ☆309 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ☆62 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… · ☆289 · Updated last month
- ☆54 · Updated 6 months ago
- Efficient and easy multi-instance LLM serving · ☆332 · Updated this week
- ☆190 · Updated 8 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆239 · Updated 4 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference · ☆400 · Updated 2 weeks ago
- LLM Serving Performance Evaluation Harness · ☆70 · Updated 3 weeks ago
- ☆61 · Updated 3 weeks ago
- ☆73 · Updated 4 months ago
- Microsoft Collective Communication Library · ☆60 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton · ☆263 · Updated this week
- ☆179 · Updated 5 months ago
- ☆88 · Updated 4 months ago