Said-Akbar / triton-gcn5
Triton for AMD MI25/50/60. Development repository for the Triton language and compiler
☆32 · Updated last week
Alternatives and similar repositories for triton-gcn5
Users interested in triton-gcn5 are comparing it to the libraries listed below
- FORK of VLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆64 · Updated 4 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆257 · Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆398 · Updated this week
- NVIDIA Linux open GPU with P2P support ☆54 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆102 · Updated this week
- ☆54 · Updated this week
- LM inference server implementation based on *.cpp. ☆273 · Updated last month
- ☆231 · Updated 2 years ago
- ROCm library files for gfx1103, with updates for other AMD GPU architectures, for use on Windows. ☆613 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) ☆165 · Updated 3 weeks ago
- Automatically quantize GGUF models ☆202 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆499 · Updated this week
- Run DeepSeek-R1 GGUFs on KTransformers ☆251 · Updated 6 months ago
- Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all Direc… ☆301 · Updated last year
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆210 · Updated last week
- Fast and memory-efficient exact attention ☆189 · Updated this week
- A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui ☆55 · Updated 2 years ago
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
- Input your VRAM and RAM and the toolchain will produce a GGUF model tuned to your system within seconds; flexible model sizing and lowes… ☆45 · Updated this week
- Make abliterated models with transformers, easy and fast ☆87 · Updated 5 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆692 · Updated 3 weeks ago
- Prebuilt Windows ROCm Libs for gfx1031 and gfx1032 ☆160 · Updated 6 months ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆314 · Updated last year
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆139 · Updated last week
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 7 months ago
- 8-bit CUDA functions for PyTorch ☆62 · Updated this week
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆125 · Updated 2 years ago
- stable-diffusion.cpp bindings for python ☆65 · Updated this week
- ☆395 · Updated 5 months ago