nlzy / triton-gfx906
Triton for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60
☆40 · Updated last month
Alternatives and similar repositories for triton-gfx906
Users interested in triton-gfx906 are comparing it to the libraries listed below.
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆365 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆1,553 · Updated this week
- ML software (llama.cpp, ComfyUI, vLLM) builds for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆103 · Updated 2 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆622 · Updated last week
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆65 · Updated 8 months ago
- llama.cpp-gfx906 ☆85 · Updated 2 weeks ago
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.) ☆2,260 · Updated last week
- One-click deployment script for KTransformers ☆57 · Updated 9 months ago
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆236 · Updated 3 weeks ago
- LM inference server implementation based on *.cpp. ☆294 · Updated 2 months ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆665 · Updated last week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆777 · Updated last week
- Implements harmful/harmless refusal removal using pure HF Transformers ☆1,445 · Updated 2 months ago
- Docs for GGUF quantization (unofficial) ☆360 · Updated 6 months ago
- CI scripts designed to build a Pascal-compatible version of vLLM. ☆12 · Updated last year
- ROCm library files for gfx1103 and other AMD GPU architectures, for use on Windows. ☆737 · Updated 4 months ago
- Build AI agents for your PC ☆894 · Updated last week
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆283 · Updated last week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆167 · Updated last week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆163 · Updated 5 months ago
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,119 · Updated last week
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https… ☆2,042 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆217 · Updated 2 months ago
- ☆809 · Updated 2 weeks ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆915 · Updated 7 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated this week
- Profiling Google Gemma 3n Model Using PyTorch Profiler ☆16 · Updated 6 months ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆751 · Updated last week
- LLM quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆989 · Updated this week
- A fork of vLLM enabling Pascal architecture GPUs ☆31 · Updated 11 months ago