ROCm / AITemplate
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. It is specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
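To make the "renders networks into C++ code" idea concrete, here is a minimal, self-contained sketch of template-based code generation. This is not AITemplate's actual API (its real pipeline adds shape inference, operator fusion, and kernel profiling); the `render_gemm` helper and `GEMM_TEMPLATE` below are hypothetical names used purely for illustration.

```python
# Illustrative sketch only: shows the core idea behind frameworks like
# AITemplate -- turning a layer specification into generated C++ source
# via string templates. Not the real AITemplate API.

GEMM_TEMPLATE = """\
// FP16 GEMM: C[{m}x{n}] = A[{m}x{k}] * B[{k}x{n}]
extern "C" void gemm_{name}(const half* A, const half* B, half* C);
"""

def render_gemm(name: str, m: int, n: int, k: int) -> str:
    """Render a C++ declaration for one GEMM layer from a template.

    In a real codegen framework, the template would also contain the
    kernel body, specialized to the given problem shape.
    """
    return GEMM_TEMPLATE.format(name=name, m=m, n=n, k=k)

# Generate source for a hypothetical fully connected layer "fc1".
source = render_gemm("fc1", m=128, n=256, k=512)
print(source)
```

The generated string would then be compiled with `nvcc` (CUDA) or `hipcc` (HIP) into a shared library that the Python runtime loads and calls.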
☆11 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for AITemplate
- Fast and memory-efficient exact attention ☆139 · Updated this week
- Development repository for the Triton language and compiler ☆93 · Updated this week
- Ahead-of-Time (AOT) Triton Math Library ☆41 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆45 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆143 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆313 · Updated this week
- AMD's graph optimization engine ☆186 · Updated this week
- RDC ☆23 · Updated this week
- A collection of examples for the ROCm software stack ☆167 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆23 · Updated 2 months ago
- High-speed GEMV kernels, at most 2.7x speedup compared to the PyTorch baseline ☆90 · Updated 4 months ago
- Stretching GPU performance for GEMMs and tensor contractions ☆223 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆35 · Updated 6 months ago
- Assembler for NVIDIA Volta and Turing GPUs ☆201 · Updated 2 years ago
- A collection of benchmarks to measure basic GPU capabilities ☆265 · Updated 5 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆271 · Updated this week
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona… ☆63 · Updated this week
- rocWMMA ☆92 · Updated this week
- 8-bit CUDA functions for PyTorch ☆38 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆57 · Updated 2 months ago
- ROCm Communication Collectives Library (RCCL) ☆270 · Updated this week
- CUDA GPU Benchmark ☆17 · Updated 4 months ago
- Next-generation BLAS implementation for the ROCm platform ☆346 · Updated this week