ROCm / AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. It is specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
☆11 · Updated 10 months ago
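For context, a minimal sketch of how such a model-compilation flow is typically driven from AITemplate's Python frontend, following the usage pattern in the upstream AITemplate examples. The `TinyMLP` module, tensor names, shapes, and the `./tmp` work directory are illustrative assumptions, not something documented in this listing.

```python
from aitemplate.compiler import compile_model
from aitemplate.frontend import nn, Tensor
from aitemplate.testing import detect_target

class TinyMLP(nn.Module):
    def __init__(self, in_dim=512, hidden=1024, out_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.fc2(self.fc1(x))

# Graph inputs are symbolic FP16 Tensors rather than eager arrays.
x = Tensor(shape=[8, 512], dtype="float16", name="input0", is_input=True)

model = TinyMLP()
model.name_parameter_tensor()   # give every weight a stable tensor name
y = model(x)
y._attrs["name"] = "output0"
y._attrs["is_output"] = True

# detect_target() selects the CUDA or ROCm (HIP) backend for the local GPU;
# compile_model() code-generates and builds the C++ kernels into a runtime module.
target = detect_target()
module = compile_model(y, target, "./tmp", "tiny_mlp")
```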
Alternatives and similar repositories for AITemplate
Users interested in AITemplate are comparing it to the libraries listed below.
- Development repository for the Triton language and compiler ☆122 · Updated this week
- Fast and memory-efficient exact attention ☆174 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆63 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- AI Tensor Engine for ROCm ☆195 · Updated this week
- ☆203 · Updated 10 months ago
- ☆34 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆396 · Updated this week
- ☆24 · Updated last week
- a simple Flash Attention v2 implementation with ROCM (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆42 · Updated 8 months ago
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona… ☆95 · Updated this week
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆248 · Updated 6 months ago
- ☆20 · Updated last month
- OpenAI Triton backend for Intel® GPUs ☆185 · Updated this week
- ☆109 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated 2 months ago
- Experimental projects related to TensorRT ☆101 · Updated this week
- RCCL Performance Benchmark Tests ☆64 · Updated this week
- rocWMMA ☆111 · Updated this week
- High-speed GEMV kernels, at most 2.7x speedup compared to pytorch baseline. ☆109 · Updated 10 months ago
- ☆76 · Updated 4 months ago
- oneCCL Bindings for Pytorch* ☆97 · Updated 3 weeks ago
- 8-bit CUDA functions for PyTorch ☆53 · Updated this week
- An extension library of WMMA API (Tensor Core API) ☆96 · Updated 10 months ago
- ☆94 · Updated 8 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆117 · Updated last year
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch ☆22 · Updated this week
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆38 · Updated 9 months ago
- Several optimization methods of half-precision general matrix vector multiplication (HGEMV) using CUDA core. ☆61 · Updated 8 months ago
- ☆143 · Updated this week