Said-Akbar / triton-gcn5
Triton for AMD MI25/50/60. Development repository for the Triton language and compiler
☆27 · Updated 4 months ago
Alternatives and similar repositories for triton-gcn5
Users interested in triton-gcn5 are comparing it to the libraries listed below.
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated 2 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆111 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆652 · Updated this week
- ☆43 · Updated this week
- run DeepSeek-R1 GGUFs on KTransformers ☆242 · Updated 4 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆111 · Updated this week
- NVIDIA Linux open GPU with P2P support ☆25 · Updated last month
- Fast and memory-efficient exact attention ☆177 · Updated this week
- ☆14 · Updated this week
- A torchless, C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆312 · Updated last year
- A lightweight cluster manager that turns your small fleet of nodes into one powerful computer, using Docker for environment consistency w… ☆52 · Updated 3 weeks ago
- automatically quant GGUF models ☆187 · Updated this week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 5 months ago
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm ☆234 · Updated this week
- A converter and basic tester for RWKV ONNX ☆42 · Updated last year
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆571 · Updated last month
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆67 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) ☆93 · Updated 3 weeks ago
- LM inference server implementation based on *.cpp. ☆236 · Updated this week
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆41 · Updated last year
- ☆80 · Updated this week
- ☆134 · Updated 3 weeks ago
- A finetuning pipeline for instruct-tuning Raven 14bn using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆436 · Updated this week
- ROCm library files for gfx1103, with updates for other AMD GPU architectures, for use on Windows. ☆549 · Updated 5 months ago
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆115 · Updated 2 years ago
- A project for real-time training of the RWKV model. ☆49 · Updated last year
- RWKV models and examples powered by candle. ☆18 · Updated 4 months ago
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, aligning. Exploring the… ☆47 · Updated 2 weeks ago