ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆102 Updated this week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- ☆44 Updated this week
- Development repository for the Triton language and compiler ☆131 Updated this week
- Fast and memory-efficient exact attention ☆189 Updated this week
- AI Tensor Engine for ROCm ☆279 Updated this week
- OpenAI Triton backend for Intel® GPUs ☆208 Updated this week
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆33 Updated 3 weeks ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including ease of use and ver… ☆265 Updated last month
- Fast and memory-efficient exact attention ☆93 Updated 2 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆265 Updated 2 months ago
- ☆88 Updated this week
- ☆119 Updated 8 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆76 Updated last week
- RCCL Performance Benchmark Tests ☆76 Updated last week
- ☆74 Updated 5 months ago
- High-speed GEMV kernels, up to 2.7× speedup over the PyTorch baseline ☆114 Updated last year
- ☆90 Updated 10 months ago
- ☆98 Updated last year
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆139 Updated last month
- A lightweight design for computation-communication overlap. ☆171 Updated last week
- Ongoing research training transformer models at scale ☆28 Updated this week
- ☆199 Updated 4 months ago
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆465 Updated this week
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆110 Updated last year
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆111 Updated this week
- ☆233 Updated last year
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆676 Updated last month
- ☆27 Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆161 Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆113 Updated last year
- An experimental CPU backend for Triton ☆153 Updated 3 months ago
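One of the repositories above compares hardware platforms via the Roofline Model. As a minimal sketch (with illustrative numbers, not taken from that repository), the model bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity:

```python
def attainable_tflops(peak_tflops: float, bandwidth_tbs: float,
                      flops: float, bytes_moved: float) -> float:
    """Roofline-bound throughput in TFLOP/s.

    peak_tflops   -- peak compute rate of the device (TFLOP/s)
    bandwidth_tbs -- peak memory bandwidth (TB/s)
    flops         -- floating-point operations performed by the kernel
    bytes_moved   -- bytes transferred to/from memory by the kernel
    """
    intensity = flops / bytes_moved  # arithmetic intensity, FLOP per byte
    return min(peak_tflops, bandwidth_tbs * intensity)

# Hypothetical device: 1300 TFLOP/s peak, 5.3 TB/s bandwidth.
# A GEMV-like kernel moving ~2 bytes per FLOP is heavily memory-bound:
print(attainable_tflops(1300.0, 5.3, flops=1.0, bytes_moved=2.0))  # 2.65
```

This is why the GEMV and attention kernels listed above are typically judged against memory bandwidth rather than peak FLOPs: at low arithmetic intensity, the bandwidth term dominates the minimum.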