ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆92 · Updated this week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention · ☆183 · Updated 2 weeks ago
- Development repository for the Triton language and compiler · ☆127 · Updated this week
- AI Tensor Engine for ROCm · ☆260 · Updated this week
- OpenAI Triton backend for Intel® GPUs · ☆205 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) · ☆261 · Updated last month
- Fast and memory-efficient exact attention · ☆91 · Updated this week
- RCCL Performance Benchmark Tests · ☆73 · Updated last week
- High-speed GEMV kernels, up to 2.7× speedup over the PyTorch baseline · ☆113 · Updated last year
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression · ☆32 · Updated 5 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… · ☆260 · Updated last week
- Ahead of Time (AOT) Triton Math Library · ☆75 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators · ☆449 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity · ☆217 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation · ☆105 · Updated 3 months ago
- A lightweight design for computation-communication overlap · ☆160 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆110 · Updated 11 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs · ☆138 · Updated last week
- An experimental CPU backend for Triton · ☆145 · Updated 2 months ago
- LLaMA INT4 CUDA inference with AWQ · ☆54 · Updated 7 months ago
- AMD's graph optimization engine · ☆240 · Updated this week
- High performance Transformer implementation in C++ · ☆129 · Updated 7 months ago