octoml / relax
A fork of tvm/unity
☆14 · Updated 2 years ago
Alternatives and similar repositories for relax
Users interested in relax are comparing it to the libraries listed below.
- ☆69 · Updated 2 years ago
- The quantitative performance comparison among DL compilers on CNN models. ☆74 · Updated 4 years ago
- Benchmark scripts for TVM ☆75 · Updated 3 years ago
- ☆9 · Updated 2 years ago
- MLIR-based partitioning system ☆117 · Updated this week
- A home for the final text of all TVM RFCs. ☆105 · Updated 10 months ago
- ☆24 · Updated last year
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 7 years ago
- tophub autotvm log collections ☆70 · Updated 2 years ago
- ☆13 · Updated 5 years ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- Static analysis framework for analyzing programs written in TVM's Relay IR. ☆28 · Updated 5 years ago
- Implementation for the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆14 · Updated 4 years ago
- ☆40 · Updated 3 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 11 months ago
- Play with MLIR right in your browser ☆135 · Updated 2 years ago
- ☆50 · Updated last year
- ☆36 · Updated 3 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 4 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆137 · Updated 2 years ago
- System for automated integration of deep learning backends. ☆47 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 8 months ago
- play gemm with tvm ☆91 · Updated 2 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- ☆162 · Updated this week
- MLIRX is now defunct. Please see PolyBlocks - https://docs.polymagelabs.com ☆38 · Updated last year
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆50 · Updated last year
- llama INT4 CUDA inference with AWQ ☆54 · Updated 6 months ago
- Conversions to MLIR EmitC ☆131 · Updated 8 months ago
- ☆241 · Updated 2 weeks ago