ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆79 · Updated this week
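For a quick sense of what the engine does, here is a minimal sketch of vLLM's offline batch-inference Python API; the model name is only an example, and any Hugging Face-compatible causal LM the engine supports should work:

```python
# Minimal vLLM offline-inference sketch (model choice is an example,
# not prescribed by this page; requires `pip install vllm` and a GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model for a quick smoke test
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)  # one completion per prompt
```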
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below:
- ☆36 · Updated this week
- Fast and memory-efficient exact attention ☆173 · Updated this week
- Development repository for the Triton language and compiler (a minimal kernel sketch follows this list) ☆122 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆189 · Updated this week
- AI Tensor Engine for ROCm ☆201 · Updated this week
- Ahead-of-Time (AOT) Triton Math Library ☆64 · Updated last week
- ☆24 · Updated last month
- Ongoing research training transformer models at scale ☆22 · Updated this week
- ☆96 · Updated 8 months ago
- ☆46 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆11 · Updated 11 months ago
- High-speed GEMV kernels, achieving up to 2.7x speedup over the PyTorch baseline ☆109 · Updated 10 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆61 · Updated 3 months ago
- ☆88 · Updated 5 months ago
- ☆110 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆75 · Updated this week
- A lightweight design for computation-communication overlap ☆136 · Updated this week
- RCCL Performance Benchmark Tests ☆67 · Updated 2 weeks ago
- ☆208 · Updated 10 months ago
- ☆61 · Updated 5 months ago
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona… ☆97 · Updated this week
- A CUTLASS implementation using SYCL ☆23 · Updated last week
- ☆63 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆251 · Updated 7 months ago
- An experimental CPU backend for Triton ☆119 · Updated this week
- Example of using PyTorch's open device registration API ☆30 · Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆81 · Updated 3 weeks ago
- An extension library for the WMMA API (Tensor Core API) ☆97 · Updated 10 months ago
- Bandwidth test for ROCm ☆56 · Updated 2 weeks ago
- ☆29 · Updated 4 months ago
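As a point of reference for the Triton-related entries above, this is a minimal sketch of a Triton kernel (the vector-add example from Triton's own tutorials); it assumes `pip install triton` and a CUDA or ROCm device:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds on the last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)  # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```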