nlzy / triton-gfx906
triton for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60
☆38 · Updated 2 months ago
Alternatives and similar repositories for triton-gfx906
Users interested in triton-gfx906 are comparing it to the libraries listed below.
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆338 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,387 · Updated this week
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆65 · Updated 7 months ago
- Triton for AMD MI25/50/60. Development repository for the Triton language and compiler ☆32 · Updated 2 weeks ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆588 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆518 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc. ☆2,025 · Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆613 · Updated this week
- LM inference server implementation based on *.cpp. ☆293 · Updated 2 weeks ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆258 · Updated 9 months ago
- ML software (llama.cpp, ComfyUI, vLLM) builds for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆74 · Updated 3 weeks ago
- Run LLM Agents on Ryzen AI PCs in Minutes ☆792 · Updated this week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆219 · Updated 3 months ago
- Implements harmful/harmless refusal removal using pure HF Transformers ☆1,352 · Updated 2 weeks ago
- One-click deployment script for KTransformers ☆55 · Updated 7 months ago
- ☆417 · Updated 8 months ago
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆924 · Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 2 weeks ago
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,848 · Updated last year
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆155 · Updated 3 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆260 · Updated last week
- GPU cluster manager for optimized AI model deployment ☆4,147 · Updated last week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆753 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated this week
- CI scripts designed to build a Pascal-compatible version of vLLM. ☆12 · Updated last year
- Docs for GGUF quantization (unofficial) ☆330 · Updated 4 months ago
- VS Code extension for LLM-assisted code/text completion ☆1,082 · Updated 3 weeks ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆708 · Updated 3 weeks ago
- 8-bit CUDA functions for PyTorch ☆69 · Updated 2 months ago
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,097 · Updated this week