ROCm / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆113, updated this week
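For orientation, here is a minimal sketch of vLLM's offline batch-inference API; the model name is an arbitrary example, and the same Python API applies on ROCm builds of vLLM.

```python
# Minimal sketch of vLLM's offline inference API.
# Assumption: "facebook/opt-125m" is just an example model;
# any Hugging Face causal LM that vLLM supports works here.
from vllm import LLM, SamplingParams

# Loads the weights and allocates the paged KV cache on the GPU.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and schedules them with continuous batching.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```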
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- Development repository for the Triton language and compiler (☆140, updated this week)
- ☆59, updated this week
- Fast and memory-efficient exact attention (☆213, updated this week; see the FlashAttention usage sketch after this list)
- AI Tensor Engine for ROCm (☆344, updated last week)
- OpenAI Triton backend for Intel® GPUs (☆225, updated this week)
- Fast and memory-efficient exact attention (☆110, updated last week)
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) (☆276, updated 6 months ago)
- FlagCX is a scalable and adaptive cross-chip communication library (☆170, updated this week)
- ☆34, updated 11 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression (☆36, updated 5 months ago)
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… (☆63, updated 7 months ago)
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation (☆123, updated last month)
- Ahead-of-Time (AOT) Triton Math Library (☆88, updated this week)
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators (☆515, updated this week)
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming (☆164, updated last week)
- A lightweight design for computation-communication overlap (☆213, updated last week)
- Ongoing research training transformer models at scale (☆35, updated last week)
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… (☆298, updated last week)
- ☆102, updated last year
- [DEPRECATED] Moved to the ROCm/rocm-systems repo (☆86, updated last week)
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline (☆127, updated last year)
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs (☆154, updated 5 months ago)
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving (☆71, updated 4 months ago)
- ☆171, updated last week
- ☆71, updated 10 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit (☆92, updated this week)
- ☆258, updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (☆148, updated 8 months ago)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity (☆233, updated 2 years ago)
- ☆158, updated last year
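Two entries above describe "fast and memory-efficient exact attention", i.e., FlashAttention forks. Below is a minimal sketch of calling such a kernel through the flash-attn Python package; it assumes you have one of those forks installed under that package name, a CUDA or ROCm GPU, and fp16/bf16 inputs.

```python
# Sketch of invoking a fused FlashAttention kernel via the flash-attn package.
# Assumptions: the package is installed as `flash_attn` and a GPU is available.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention computed in one fused, memory-efficient kernel;
# the output keeps q's (batch, seqlen, nheads, headdim) layout.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```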