ROCm / pyrsmi
Python package of rocm-smi-lib
☆24 · Updated 3 months ago
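As a rough sketch of what pyrsmi offers: it wraps rocm-smi-lib so GPU telemetry can be queried from Python. The function names below (`smi_initialize`, `smi_get_device_count`, `smi_get_device_utilization`, `smi_shutdown`) follow the project's README; real data is only returned on a ROCm-capable machine with pyrsmi installed, so this sketch degrades gracefully elsewhere.

```python
# Hedged usage sketch for pyrsmi; function names assumed from the README.
try:
    from pyrsmi import rocml
except ImportError:
    rocml = None  # pyrsmi not installed; the sketch falls back to no data

def gpu_utilizations():
    """Return a list of (device_index, busy_percent) tuples,
    or [] when pyrsmi/ROCm is unavailable."""
    if rocml is None:
        return []
    rocml.smi_initialize()
    try:
        return [(i, rocml.smi_get_device_utilization(i))
                for i in range(rocml.smi_get_device_count())]
    finally:
        rocml.smi_shutdown()

print(gpu_utilizations())
```

On a machine without ROCm this prints an empty list; with ROCm it reports per-device busy percentages, similar to what `rocm-smi` shows on the command line.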
Alternatives and similar repositories for pyrsmi
Users interested in pyrsmi are comparing it to the libraries listed below.
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆40 · Updated 2 months ago
- ☆72 · Updated 6 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated this week
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆79 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆171 · Updated this week
- ☆48 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments ☆84 · Updated last month
- No-GIL Python environment featuring NVIDIA Deep Learning libraries ☆64 · Updated 6 months ago
- ☆21 · Updated 7 months ago
- ☆92 · Updated 11 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆381 · Updated 2 weeks ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆114 · Updated 2 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆61 · Updated last week
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆68 · Updated 2 weeks ago
- TORCH_LOGS parser for PT2 ☆62 · Updated 3 weeks ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated 7 months ago
- High-speed GEMV kernels, achieving up to 2.7x speedup over the PyTorch baseline ☆116 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆98 · Updated 3 months ago
- ☆102 · Updated this week
- Collection of kernels written in the Triton language ☆156 · Updated 6 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆215 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆83 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆83 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 3 months ago
- MLPerf™ logging library ☆37 · Updated this week
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆120 · Updated 10 months ago