ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆84 · Updated this week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- ☆38 · Updated this week
- AI Tensor Engine for ROCm · ☆208 · Updated this week
- Fast and memory-efficient exact attention · ☆174 · Updated this week
- Development repository for the Triton language and compiler · ☆125 · Updated this week
- ☆90 · Updated 6 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline · ☆109 · Updated 11 months ago
- OpenAI Triton backend for Intel® GPUs · ☆191 · Updated this week
- Ahead-of-Time (AOT) Triton Math Library · ☆67 · Updated last week
- Ongoing research training transformer models at scale · ☆23 · Updated 2 weeks ago
- ☆97 · Updated 9 months ago
- ☆212 · Updated 11 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency · ☆110 · Updated 9 months ago
- ☆19 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆252 · Updated 8 months ago
- A lightweight design for computation-communication overlap · ☆143 · Updated last week
- Applied AI experiments and examples for PyTorch · ☆277 · Updated 3 weeks ago
- Experimental projects related to TensorRT · ☆105 · Updated last week
- Fast and memory-efficient exact attention · ☆76 · Updated this week
- ☆69 · Updated last week
- ☆81 · Updated 7 months ago
- ☆20 · Updated 3 months ago
- ☆29 · Updated 4 months ago
- ☆117 · Updated last month
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs · ☆128 · Updated 2 months ago
- ☆139 · Updated last year
- Perplexity GPU Kernels · ☆375 · Updated 2 weeks ago
- Extensible collectives library in Triton · ☆86 · Updated 2 months ago
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch · ☆22 · Updated 2 weeks ago
- RCCL Performance Benchmark Tests · ☆68 · Updated last month
- Ultra and Unified CCL · ☆165 · Updated this week