comaniac / epoi
Benchmark PyTorch Custom Operators
☆13 · Updated last year
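As a rough illustration of the kind of measurement epoi automates, here is a minimal sketch (not epoi's actual API) that times a custom operator against its eager PyTorch baseline with `torch.utils.benchmark`; the `fused_gelu` placeholder and the tensor shape are illustrative assumptions.

```python
# Minimal sketch (not epoi's API): compare a custom op against the eager baseline
# using torch.utils.benchmark. fused_gelu and the shape below are placeholders.
import torch
import torch.utils.benchmark as benchmark

def fused_gelu(x):
    # Stand-in for a real custom/fused operator under test.
    return torch.nn.functional.gelu(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)

baseline = benchmark.Timer(
    stmt="torch.nn.functional.gelu(x)",
    globals={"torch": torch, "x": x},
    label="gelu", sub_label="eager",
)
candidate = benchmark.Timer(
    stmt="fused_gelu(x)",
    globals={"fused_gelu": fused_gelu, "x": x},
    label="gelu", sub_label="custom",
)

# blocked_autorange() chooses the iteration count automatically and
# handles CUDA synchronization when timing GPU kernels.
print(baseline.blocked_autorange())
print(candidate.blocked_autorange())
```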
Related projects
Alternatives and complementary repositories for epoi
- ☆23 · Updated 9 months ago
- Benchmark for matrix multiplications between dense and block sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSPARSE. · ☆24 · Updated 4 years ago
- DietCode Code Release · ☆61 · Updated 2 years ago
- ☆44 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation · ☆26 · Updated 5 years ago
- System for automated integration of deep learning backends. · ☆48 · Updated 2 years ago
- Benchmark scripts for TVM · ☆73 · Updated 2 years ago
- An external memory allocator example for PyTorch. · ☆13 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning · ☆131 · Updated last year
- ☆17 · Updated 3 years ago
- An extension of TVMScript for writing simple and high-performance GPU kernels with Tensor Cores. · ☆49 · Updated 3 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections · ☆114 · Updated 2 years ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch · ☆29 · Updated 3 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS · ☆17 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer · ☆85 · Updated 8 months ago
- ☆18 · Updated last month
- ☆38 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- Implementation for the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). · ☆13 · Updated 3 years ago
- play gemm with tvm · ☆84 · Updated last year
- ThrillerFlow is a Dataflow Analysis and Codegen Framework written in Rust. · ☆10 · Updated last month
- ☆35 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- The code for our paper "Neural Architecture Search as Program Transformation Exploration" · ☆18 · Updated 3 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators · ☆102 · Updated 2 years ago
- Optimize tensor programs fast with Felix, a gradient-descent autotuner. · ☆19 · Updated 6 months ago
- ☆8 · Updated last year
- ☆67 · Updated last year
- An Attention Superoptimizer · ☆20 · Updated 6 months ago