Lunderberg / tvmcon-2021
Slides from 2021-12-15 talk, "TVM Developer Bootcamp – Writing Hardware Backends"
☆10 · Updated 3 years ago
Alternatives and similar repositories for tvmcon-2021
Users interested in tvmcon-2021 are comparing it to the libraries listed below.
- ☆43 · Updated last year
- Implementation of the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆14 · Updated 4 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- System for automated integration of deep learning backends. ☆47 · Updated 2 years ago
- Visualize TVM Relay program graph ☆12 · Updated 5 years ago
- Benchmark PyTorch Custom Operators ☆14 · Updated last year
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- ☆23 · Updated 7 months ago
- The quantitative performance comparison among DL compilers on CNN models. ☆74 · Updated 4 years ago
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆14 · Updated 4 years ago
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆18 · Updated 2 years ago
- DietCode Code Release ☆64 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- ☆9 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆50 · Updated 11 months ago
- An Attention Superoptimizer ☆21 · Updated 5 months ago
- Benchmarks for matrix multiplication between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSparse. ☆24 · Updated 4 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- The documents for TVM Unity ☆8 · Updated 10 months ago
- The code for our paper "Neural Architecture Search as Program Transformation Exploration" ☆18 · Updated 4 years ago
- ☆24 · Updated last year
- ☆18 · Updated 4 years ago
- Mille Crepe Bench: layer-wise performance analysis for deep learning frameworks. ☆17 · Updated 5 years ago
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆15 · Updated 6 years ago
- ☆19 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- ☆19 · Updated 8 months ago