Said-Akbar / triton-gcn5
Triton for AMD MI25/50/60. Development repository for the Triton language and compiler
☆32 · Updated 3 weeks ago
Alternatives and similar repositories for triton-gcn5
Users interested in triton-gcn5 are comparing it to the libraries listed below.
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆65 · Updated 8 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆355 · Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆613 · Updated this week
- ☆63 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) ☆266 · Updated last month
- NVIDIA Linux open GPU with P2P support ☆103 · Updated last month
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆690 · Updated this week
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- ROCm library files for gfx1103 and other AMD GPU architectures, for use on Windows ☆727 · Updated 3 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆1,455 · Updated this week
- 8-bit CUDA functions for PyTorch ☆69 · Updated 3 months ago
- Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all Direc… ☆300 · Updated 2 years ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆154 · Updated this week
- Automatically quantize GGUF models ☆219 · Updated 2 weeks ago
- llama.cpp-gfx906 ☆75 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆113 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in a Windows ZLUDA en… ☆50 · Updated last year
- AI inferencing at the edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆725 · Updated last week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 11 months ago
- ML software (llama.cpp, ComfyUI, vLLM) builds for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆90 · Updated last month
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆118 · Updated last week
- Run DeepSeek-R1 GGUFs on KTransformers ☆259 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆206 · Updated 2 weeks ago
- Make abliterated models with transformers, easy and fast ☆111 · Updated 3 weeks ago
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆595 · Updated 2 months ago
- Fully-featured, beautiful web interface for vLLM, built with Next.js. ☆166 · Updated 3 weeks ago
- Produce your own Dynamic 3.0 quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆75 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆591 · Updated last week
- A project for real-time training of the RWKV model. ☆50 · Updated last year