Said-Akbar / vllm-rocm
Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs.
☆65 · Updated 8 months ago
Alternatives and similar repositories for vllm-rocm
Users interested in vllm-rocm are comparing it to the libraries listed below.
- Triton for AMD MI25/50/60. Development repository for the Triton language and compiler. ☆32 · Updated 3 weeks ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆359 · Updated last week
- ML software (llama.cpp, ComfyUI, vLLM) builds for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆92 · Updated last month
- LM inference server implementation based on *.cpp. ☆295 · Updated last month
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration. ☆159 · Updated this week
- Run DeepSeek-R1 GGUFs on KTransformers. ☆259 · Updated 10 months ago
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆160 · Updated 4 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆187 · Updated 2 weeks ago
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆76 · Updated this week
- llama.cpp-gfx906 ☆82 · Updated this week
- Download models from the Ollama library, without Ollama (see the registry-download sketch after this list). ☆119 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels). ☆271 · Updated last month
- Review/check GGUF files and estimate their memory usage and maximum tokens per second (a worked estimate follows this list). ☆223 · Updated this week
- NVIDIA Linux open GPU kernel modules with P2P support. ☆103 · Updated last month
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints (see the client example after this list). ☆270 · Updated last week
- LLM voice chat project that connects a locally deployed Ollama with ChatTTS to enable spoken conversations with an LLM. ☆65 · Updated last year
- ☆127 · Updated last year
- A manual for using the Tesla P40 GPU. ☆139 · Updated last year
- LLM inference in C/C++. ☆104 · Updated 3 weeks ago
- Triton for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆38 · Updated last month
- The HIP Environment and ROCm Kit - a lightweight open source build system for HIP and ROCm. ☆120 · Updated last week
- ☆108 · Updated 4 months ago
- xllamacpp - a Python wrapper of llama.cpp. ☆68 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆591 · Updated last week
- ☆94 · Updated 6 months ago
- GPU Power and Performance Manager. ☆64 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆86 · Updated last week
- LLM inference in C/C++. ☆21 · Updated 9 months ago
- 8-bit CUDA functions for PyTorch, ROCm compatible. ☆41 · Updated last year
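
A few of the entries above lend themselves to short sketches. First, the Ollama-free downloader: registry.ollama.ai follows the OCI distribution spec, so a model can be pulled with plain HTTP. The paths below (`/v2/library/<model>/manifests/<tag>`, `/v2/library/<model>/blobs/<digest>`) and the example model name are assumptions based on that spec, not details taken from the listed repository.

```python
# Minimal sketch: pull a model's layers from the Ollama registry without
# the Ollama client, assuming OCI-style manifest and blob endpoints.
import json
import urllib.request

REGISTRY = "https://registry.ollama.ai"

def pull(model: str, tag: str = "latest") -> None:
    # Fetch the manifest that lists the model's layers (weights, template, params).
    req = urllib.request.Request(
        f"{REGISTRY}/v2/library/{model}/manifests/{tag}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        manifest = json.load(resp)

    # Download each layer blob; the GGUF weights are one of these layers.
    for layer in manifest["layers"]:
        digest = layer["digest"]                      # e.g. "sha256:abcd..."
        blob_url = f"{REGISTRY}/v2/library/{model}/blobs/{digest}"
        out_path = digest.replace(":", "-")
        print(f"fetching {layer['mediaType']} -> {out_path}")
        urllib.request.urlretrieve(blob_url, out_path)

if __name__ == "__main__":
    pull("tinyllama")  # hypothetical model name for illustration
```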
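Second, the GGUF checker's memory estimate. A first-order approximation (my own arithmetic, not the tool's exact formula) is weight file size plus KV cache plus runtime overhead; the Llama-2-7B-like shape and the 10% overhead factor below are illustrative assumptions.

```python
# Back-of-envelope VRAM estimate: weights + KV cache + overhead.
# Model shape and overhead factor are assumed, not read from a GGUF file.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    # One K and one V tensor per layer, fp16 cache (2 bytes per element).
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

weights = 3.8 * 1024**3                                   # ~3.8 GiB Q4 file (assumed)
kv = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, ctx_len=4096)
total = (weights + kv) * 1.10                             # +10% scratch/graph overhead

print(f"KV cache at 4k context: {kv / 1024**3:.2f} GiB")  # -> 2.00 GiB
print(f"Estimated VRAM needed : {total / 1024**3:.2f} GiB")
```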
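Finally, the servers above that expose OpenAI-compatible endpoints (the vLLM forks, the Intel engine, the TTS/STT server) can all be driven by the same client code. The base URL and model id below are placeholders; only the /v1/chat/completions request shape comes from the OpenAI API.

```python
# Minimal OpenAI-compatible chat request; works against any listed server
# that implements /v1/chat/completions. URL and model name are placeholders.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"    # assumed local server address

payload = {
    "model": "my-local-model",           # placeholder model id
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```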