octoml / relax
A fork of tvm/unity
☆15 · Updated last year
Alternatives and similar repositories for relax:
Users interested in relax are comparing it to the libraries listed below.
- ☆69 · Updated last year
- The quantitative performance comparison among DL compilers on CNN models. ☆75 · Updated 4 years ago
- Benchmark scripts for TVM ☆73 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆87 · Updated 10 months ago
- MLIRX is now defunct. Please see PolyBlocks - https://docs.polymagelabs.com ☆38 · Updated last year
- ☆23 · Updated 10 months ago
- A home for the final text of all TVM RFCs. ☆101 · Updated 3 months ago
- ☆36 · Updated 2 years ago
- An MLIR frontend for tensor expressions ☆24 · Updated 4 years ago
- System for automated integration of deep learning backends. ☆48 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆37 · Updated 8 months ago
- ☆21 · Updated last week
- play gemm with tvm ☆85 · Updated last year
- ☆9 · Updated last year
- MLIR-based partitioning system ☆56 · Updated this week
- Implementation for the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆13 · Updated 3 years ago
- GPTQ inference TVM kernel ☆38 · Updated 8 months ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆51 · Updated 5 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆99 · Updated 4 months ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆103 · Updated last month
- An IR for efficiently simulating distributed ML computation. ☆25 · Updated last year
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 6 years ago
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆62 · Updated 6 years ago
- ☆36 · Updated this week
- tophub autotvm log collections ☆70 · Updated 2 years ago
- llama INT4 CUDA inference with AWQ ☆49 · Updated 6 months ago
- ☆21 · Updated last year
- An experimental CPU backend for Triton ☆75 · Updated this week