A high-throughput and memory-efficient inference and serving engine for LLMs
☆85 · Apr 13, 2026 · Updated this week
Alternatives and similar repositories for vllm-fork
Users interested in vllm-fork are comparing it to the libraries listed below.
- ☆18 · Mar 25, 2026 · Updated 2 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆14 · Jan 8, 2026 · Updated 3 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Mar 20, 2025 · Updated last year
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆209 · Apr 3, 2026 · Updated last week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆171 · Jan 8, 2026 · Updated 3 months ago
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi ☆42 · Feb 3, 2025 · Updated last year
- Provides examples for writing and building Habana custom kernels using HabanaTools ☆25 · Apr 15, 2025 · Updated 11 months ago
- PM Workshop China ☆10 · Apr 11, 2019 · Updated 7 years ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Mar 31, 2026 · Updated 2 weeks ago
- GenAI components at the micro-service level; a GenAI service composer to create mega-services ☆195 · Apr 7, 2026 · Updated last week
- ☆24 · Oct 9, 2025 · Updated 6 months ago
- Setup and installation instructions for Habana binaries and Docker image creation ☆28 · Jan 8, 2026 · Updated 3 months ago
- ☆83 · Updated this week
- Intel Gaudi's Megatron-DeepSpeed large language models for training ☆18 · Dec 19, 2024 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Mar 6, 2024 · Updated 2 years ago
- Distributed KV cache scheduling & offloading libraries ☆126 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆65 · Jun 30, 2025 · Updated 9 months ago
- Cloud Native Benchmarking of Foundation Models ☆45 · Jul 31, 2025 · Updated 8 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆284 · Apr 7, 2026 · Updated last week
- OpenVINO LLM Benchmark ☆11 · Dec 7, 2023 · Updated 2 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open… ☆728 · Updated this week
- Intel Graphics System Firmware Update Library (IGSC FUL) is a pure C low-level library that exposes a required API to perform a firmware … ☆77 · Jan 15, 2026 · Updated 2 months ago
- An AI-agent application for generating AI children's picture books, based on GPT, langchain, function calling, Stable Diffusion, and more ☆25 · Oct 11, 2023 · Updated 2 years ago
- ☆19 · Jul 24, 2025 · Updated 8 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆38 · Aug 29, 2025 · Updated 7 months ago
- ☆30 · Aug 31, 2022 · Updated 3 years ago
- ☆248 · Mar 23, 2026 · Updated 3 weeks ago
- ☆17 · Feb 3, 2026 · Updated 2 months ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆55 · Jul 16, 2025 · Updated 8 months ago
- vLLM performance dashboard ☆44 · Apr 26, 2024 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆222 · Apr 3, 2026 · Updated last week
- Benchmark Suite Invocation Scripting ☆11 · Mar 16, 2022 · Updated 4 years ago
- The kernel module management operator builds, signs, and loads kernel modules on OpenShift. ☆32 · Updated this week
- ☆160 · Mar 12, 2026 · Updated last month
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆561 · Apr 2, 2026 · Updated last week
- SPDK fork of nvme-cli. No longer supported; use standard nvme-cli with SPDK nvme CUSE instead. See https://spdk.io/doc/nvme.html#nvme_… ☆15 · Apr 10, 2024 · Updated 2 years ago
- High-Speed Stateful Packet Processor for Programmable Switches ☆14 · Dec 18, 2022 · Updated 3 years ago
- Experimental projects related to TensorRT ☆122 · Updated this week