nlzy / vllm-gfx906
vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60
☆280 Updated this week
Alternatives and similar repositories for vllm-gfx906
Users interested in vllm-gfx906 are comparing it to the repositories listed below
- Triton for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆28 Updated 2 weeks ago
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆64 Updated 5 months ago
- Triton for AMD MI25/50/60. Development repository for the Triton language and compiler ☆32 Updated 3 weeks ago
- One-click deployment script for KTransformers ☆51 Updated 5 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,246 Updated this week
- ROCm Library Files for gfx1103, with updates for other AMD GPU architectures, for use on Windows. ☆632 Updated 2 weeks ago
- LM inference server implementation based on *.cpp. ☆279 Updated last month
- Run DeepSeek-R1 GGUFs on KTransformers ☆252 Updated 7 months ago
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆132 Updated last month
- Implements harmful/harmless refusal removal using pure HF Transformers ☆1,168 Updated last year
- A vLLM-based deployment tool (with a GUI) for running large models in a mixed VRAM/DRAM mode; the VRAM-and-DRAM mode is somewhat slower, but it solves the problem of deploying very large models on ordinary home computers. ☆86 Updated 5 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆438 Updated this week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆208 Updated last month
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆280 Updated this week
- This project simplifies the installation process of likelovewant's library, making it easier for users to manage and update their AMD GPU… ☆259 Updated 2 weeks ago
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆582 Updated 3 weeks ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆513 Updated this week
- Adds MI25/MI50/MI60 support to Triton 3.2.0 ☆14 Updated 5 months ago
- Get up and running with Llama 3, Mistral, Gemma, and other large language models, with added support for more AMD GPUs. ☆1,389 Updated last week
- Model swapping for llama.cpp (or any local OpenAI API compatible server) ☆1,655 Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆716 Updated last week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints. ☆211 Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆106 Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆639 Updated last week
- RAG system for RWKV ☆51 Updated 10 months ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆653 Updated last month
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,061 Updated this week
- Simple, scalable AI model deployment on GPU clusters ☆3,799 Updated last week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆58 Updated this week
- A guide to using the Tesla P40 GPU ☆132 Updated 10 months ago