ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆110 · Updated this week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- Development repository for the Triton language and compiler ☆137 · Updated this week
- ☆51 · Updated this week
- Fast and memory-efficient exact attention ☆201 · Updated last month
- AI Tensor Engine for ROCm ☆306 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated this week
- Fast and memory-efficient exact attention ☆102 · Updated last week
- RCCL Performance Benchmark Tests ☆79 · Updated last week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆490 · Updated this week
- QuickReduce is a performant all-reduce library for AMD ROCm that supports inline compression. ☆35 · Updated 3 months ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including ease of use and ver… ☆268 · Updated 3 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆84 · Updated 2 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆272 · Updated 4 months ago
- ☆170 · Updated 2 weeks ago
- ☆128 · Updated last week
- Intel® Extension for DeepSpeed* brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 5 months ago
- ☆27 · Updated 2 months ago
- A tool for bandwidth measurements on NVIDIA GPUs. ☆571 · Updated 7 months ago
- AMD's graph optimization engine. ☆266 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo