ROCm / AITemplate
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
☆12 · Updated last year
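For context on what that looks like in practice, below is a minimal sketch of defining and compiling a toy FP16 model with AITemplate. It follows the pattern in the repository's published examples, but the specific names used here (`TinyNet`, the 64-to-32 `nn.Linear` layer, the `./tmp` build directory, and the `input0`/`output0` tensor names) are illustrative assumptions, and API details such as `compile_model`, `detect_target`, and the `_attrs` output marking should be checked against the version you install.

```python
# Illustrative sketch only: a tiny FP16 model compiled with AITemplate.
# Layer sizes, tensor names, and the build directory are assumptions,
# not fixed by the AITemplate repository itself.
from aitemplate.compiler import compile_model
from aitemplate.frontend import nn, Tensor
from aitemplate.testing import detect_target


class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(64, 32)  # hypothetical toy layer

    def forward(self, x):
        return self.dense(x)


# Symbolic FP16 input tensor; AITemplate traces the graph from it.
x = Tensor(shape=[16, 64], dtype="float16", name="input0", is_input=True)
y = TinyNet()(x)
y._attrs["is_output"] = True
y._attrs["name"] = "output0"

# detect_target() selects the CUDA or ROCm backend present on the machine;
# compile_model() generates and builds the C++ kernels into ./tmp and
# returns a module that can be driven with PyTorch tensors.
module = compile_model(y, detect_target(), "./tmp", "tiny_net")
```

In the repository's examples, the compiled module is then bound to weights and run against PyTorch tensors (via `run_with_tensors`); that is how the ResNet and Stable Diffusion examples are driven.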
Alternatives and similar repositories for AITemplate
Users interested in AITemplate are comparing it to the libraries listed below:
- Fast and memory-efficient exact attention ☆200 · Updated last month
- Development repository for the Triton language and compiler ☆137 · Updated last week
- AI Tensor Engine for ROCm ☆301 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
- ☆27 · Updated last month
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆483 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆108 · Updated this week
- ☆51 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆23 · Updated last week
- OpenAI Triton backend for Intel® GPUs ☆219 · Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆563 · Updated this week
- Monorepo for ROCm libraries ☆177 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆83 · Updated last week
- Collection of benchmarks to measure basic GPU capabilities ☆456 · Updated 3 weeks ago
- 8-bit CUDA functions for PyTorch ☆68 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆272 · Updated 4 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆35 · Updated 2 months ago
- ROCm Communication Collectives Library (RCCL) ☆399 · Updated this week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆111 · Updated this week
- AMD's graph optimization engine. ☆266 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆709 · Updated 3 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆63 · Updated 4 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆137 · Updated this week
- A tool for generating information about the matrix multiplication instructions in AMD Radeon™ and AMD Instinct™ accelerators ☆120 · Updated this week
- ☆61 · Updated this week
- ☆243 · Updated last year
- SYCL* Templates for Linear Algebra (SYCL*TLA) - SYCL-based CUTLASS implementation for Intel GPUs ☆51 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆114 · Updated this week
- ☆154 · Updated 6 months ago
- Modular RDMA Interface ☆59 · Updated this week