A high-throughput and memory-efficient inference and serving engine for LLMs
☆117 · Mar 19, 2026 · Updated this week
Alternatives and similar repositories for vllm
Users who are interested in vllm are comparing it to the libraries listed below.
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Jun 24, 2024 · Updated last year
- ☆72 · Updated this week
- Development repository for the Triton language and compiler ☆143 · Updated this week
- A survey of manufacturer-provided DRAM operating parameters and timings as specified by DRAM chip datasheets from between 1970 and 2021. … ☆11 · May 4, 2022 · Updated 3 years ago
- ☆30 · Mar 2, 2026 · Updated 3 weeks ago
- ☆105 · Sep 9, 2024 · Updated last year
- ☆23 · Mar 16, 2026 · Updated last week
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Dec 17, 2024 · Updated last year
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆113 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆88 · Mar 5, 2026 · Updated 2 weeks ago
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆252 · Updated this week
- Super-repo for ROCm libraries ☆280 · Updated this week
- A PyTorch Extension: Tools for easy mixed-precision and distributed training in PyTorch ☆25 · Updated this week
- A system validation and diagnostics tool for monitoring, stress testing, detecting, and troubleshooting issues impacting AMD GPUs in high… ☆97 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo. NOTE: the develop branch is maintained as a read-only mirror ☆525 · Updated this week
- ☆105 · Mar 12, 2026 · Updated last week
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆94 · Sep 4, 2024 · Updated last year
- AMD's graph optimization engine. ☆284 · Updated this week
- AI Tensor Engine for ROCm ☆385 · Updated this week
- Row-wise block scaling for FP8 quantized matrix multiplication. Solution to the GPU MODE AMD challenge. ☆18 · Feb 9, 2026 · Updated last month
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆84 · Feb 11, 2026 · Updated last month
- vLLM performance dashboard ☆43 · Apr 26, 2024 · Updated last year
- Libraries integrating MIGraphX with PyTorch ☆16 · Dec 27, 2025 · Updated 2 months ago
- HIPIFY: Convert CUDA to Portable C++ Code ☆678 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆413 · Updated this week
- A fork of flux-fast that makes flux-fast even faster with cache-dit; 3.3x speedup on NVIDIA L20. ☆24 · Jul 18, 2025 · Updated 8 months ago
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆145 · Updated this week
- CMake modules used within the ROCm libraries ☆73 · Mar 13, 2026 · Updated last week
- High Performance Linpack for Next-Generation AMD HPC Accelerators ☆68 · Dec 10, 2025 · Updated 3 months ago
- Nsight Compute in Docker ☆13 · Dec 21, 2023 · Updated 2 years ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆38 · Aug 29, 2025 · Updated 6 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆389 · Apr 13, 2025 · Updated 11 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆34 · Feb 26, 2026 · Updated 3 weeks ago
- AMD's C++ library for accelerating tensor primitives ☆49 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Sep 18, 2025 · Updated 6 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- DeeperGEMM: crazy optimized version ☆75 · May 5, 2025 · Updated 10 months ago
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Jan 11, 2025 · Updated last year