ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆108 · Updated last week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- Development repository for the Triton language and compiler ☆136 · Updated this week
- ☆51 · Updated this week
- Fast and memory-efficient exact attention ☆198 · Updated 3 weeks ago
- AI Tensor Engine for ROCm ☆296 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆219 · Updated this week
- RCCL Performance Benchmark Tests ☆78 · Updated last week
- Fast and memory-efficient exact attention ☆97 · Updated this week
- ☆93 · Updated last year
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆267 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆268 · Updated 3 months ago
- Ahead of Time (AOT) Triton Math Library ☆81 · Updated this week
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆145 · Updated 2 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression ☆35 · Updated 2 months ago
- An experimental CPU backend for Triton ☆157 · Updated last week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆484 · Updated this week
- ☆101 · Updated last year
- Ongoing research training transformer models at scale ☆31 · Updated this week
- ☆146 · Updated 10 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆63 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆84 · Updated this week
- ☆168 · Updated last week
- ☆122 · Updated last week
- Llama INT4 CUDA inference with AWQ ☆55 · Updated 9 months ago
- Bandwidth test for ROCm ☆69 · Updated last week
- A lightweight design for computation-communication overlap ☆183 · Updated last month
- ☆33 · Updated 9 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆101 · Updated this week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆75 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆120 · Updated last year
- ☆243 · Updated last year