openxla / stablehlo
Backward compatible ML compute opset inspired by HLO/MHLO
☆449 · Updated this week
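For context on what this "compute opset" looks like in practice, here is a minimal, hedged sketch of printing the StableHLO IR that JAX emits for a small function. It assumes a recent JAX install where `jax.jit(...).lower(...).compiler_ir(dialect="stablehlo")` is available; the function and argument names are illustrative, not taken from this page.

```python
# Hedged sketch: dump the StableHLO module JAX lowers a simple function to.
# Assumes a recent JAX where Lowered.compiler_ir() accepts dialect="stablehlo".
import jax
import jax.numpy as jnp

def add(x, y):
    # Elementwise add; should lower to a stablehlo.add op on tensor<4xf32>.
    return x + y

args = (jnp.ones((4,), jnp.float32), jnp.ones((4,), jnp.float32))
lowered = jax.jit(add).lower(*args)
print(lowered.compiler_ir(dialect="stablehlo"))  # prints the MLIR module text
```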
Alternatives and similar repositories for stablehlo:
Users interested in stablehlo are comparing it to the libraries listed below.
- Stores documents and resources used by the OpenXLA developer community ☆117 · Updated 7 months ago
- ☆406 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆818 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆228 · Updated this week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,447 · Updated this week
- An open-source, efficient deep learning framework/compiler, written in Python. ☆683 · Updated this week
- Python interface for MLIR - the Multi-Level Intermediate Representation ☆244 · Updated 3 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆310 · Updated this week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆696 · Updated 2 years ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆100 · Updated this week
- ☆159 · Updated 8 months ago
- Experimental projects related to TensorRT ☆89 · Updated this week
- A library to analyze PyTorch traces. ☆340 · Updated this week
- An experimental CPU backend for Triton ☆94 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆165 · Updated this week
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels. ☆128 · Updated last year
- A model compilation solution for various hardware ☆406 · Updated last week
- MLIR-based partitioning system ☆67 · Updated this week
- ☆194 · Updated last year
- A performant and modular runtime for TensorFlow ☆759 · Updated last week
- MLIR For Beginners tutorial ☆914 · Updated 3 weeks ago
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆355 · Updated this week
- Assembler for NVIDIA Volta and Turing GPUs ☆214 · Updated 3 years ago
- ☆49 · Updated 11 months ago
- ☆233 · Updated 2 years ago
- CUDA Kernel Benchmarking Library ☆578 · Updated 3 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,264 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆381 · Updated last month
- Fast CUDA matrix multiplication from scratch ☆648 · Updated last year
- Fastest kernels written from scratch ☆184 · Updated 2 weeks ago