intel / intel-extension-for-openxla
☆50 · Updated 2 months ago
Alternatives and similar repositories for intel-extension-for-openxla
Users interested in intel-extension-for-openxla are comparing it to the libraries listed below.
- ☆53 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆75 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆349 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆62 · Updated last month
- OpenAI Triton backend for Intel® GPUs ☆204 · Updated this week
- oneCCL Bindings for Pytorch* ☆100 · Updated 2 weeks ago
- MLIR-based partitioning system ☆123 · Updated this week
- Stores documents and resources used by the OpenXLA developer community ☆127 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆44 · Updated last week
- ☆50 · Updated last year
- High-Performance SGEMM on CUDA devices ☆97 · Updated 7 months ago
- ☆28 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆52 · Updated this week
- Development repository for the Triton language and compiler ☆127 · Updated this week
- Backward compatible ML compute opset inspired by HLO/MHLO ☆525 · Updated this week
- ☆62 · Updated 8 months ago
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆111 · Updated this week
- ☆122 · Updated last week
- ☆41 · Updated this week
- FP64 equivalent GEMM via Int8 Tensor Cores using the Ozaki scheme ☆80 · Updated 5 months ago
- A CUTLASS implementation using SYCL ☆35 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆247 · Updated last week
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆40 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup compared to the PyTorch baseline ☆113 · Updated last year
- Samples demonstrating how to use the Compute Sanitizer Tools and Public API ☆85 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆95 · Updated last month
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆146 · Updated last week
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆60 · Updated 2 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆291 · Updated 2 weeks ago