d2l-ai / d2l-tvm
Dive into Deep Learning Compiler
☆645 · Updated 3 years ago
Alternatives and similar repositories for d2l-tvm
Users interested in d2l-tvm are comparing it to the libraries listed below.
- The Tensor Algebra SuperOptimizer for Deep Learning ☆736 · Updated 2 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,005 · Updated last year
- TVM integration into PyTorch ☆456 · Updated 6 years ago
- Place for meetup slides ☆140 · Updated 5 years ago
- ☆192 · Updated 2 years ago
- ☆250 · Updated 5 months ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads ☆914 · Updated last year
- ☆423 · Updated 2 weeks ago
- Row-major matmul optimization ☆698 · Updated 5 months ago
- Guide for building custom ops for TensorFlow ☆385 · Updated 2 years ago
- Deep learning framework performance profiling toolkit ☆294 · Updated 3 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,516 · Updated this week
- heterogeneity-aware-lowering-and-optimization ☆257 · Updated 2 years ago
- CS294: AI for Systems and Systems for AI ☆227 · Updated 6 years ago
- A model compilation solution for various hardware ☆461 · Updated 5 months ago
- Symbolic Expression and Statement Module for new DSLs ☆205 · Updated 5 years ago
- ☆601 · Updated 7 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- A home for the final text of all TVM RFCs ☆109 · Updated last year
- BLISlab: A Sandbox for Optimizing GEMM ☆554 · Updated 4 years ago
- MegCC is a deep learning model compiler with an ultra-lightweight, efficient, and easily portable runtime ☆489 · Updated last year
- ☆622 · Updated last month
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆965 · Updated this week
- An implementation of an SGEMM kernel optimized for the L1d cache ☆233 · Updated last year
- A performant and modular runtime for TensorFlow ☆755 · Updated 4 months ago
- Examples for the TVM schedule API ☆101 · Updated 2 years ago
- ☆218 · Updated last year
- High-performance, cross-platform inference engine; Anakin runs on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices ☆535 · Updated 3 years ago
- ☆145 · Updated 11 months ago
- Notes on reading the TensorFlow source code ☆192 · Updated 7 years ago