d2l-ai / d2l-tvm
Dive into Deep Learning Compiler
☆646 · Updated 2 years ago
Alternatives and similar repositories for d2l-tvm:
Users interested in d2l-tvm are comparing it to the libraries listed below.
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆981 · Updated 6 months ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆706 · Updated 2 years ago
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- ☆192 · Updated 2 years ago
- row-major matmul optimization ☆622 · Updated last year
- Place for meetup slides ☆140 · Updated 4 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆858 · Updated 3 months ago
- ☆235 · Updated 2 years ago
- ☆410 · Updated this week
- Running BERT without Padding ☆471 · Updated 3 years ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆840 · Updated this week
- Deep Learning Framework Performance Profiling Toolkit ☆285 · Updated 3 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,299 · Updated this week
- A model compilation solution for various hardware ☆419 · Updated last week
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆197 · Updated 2 years ago
- ☆205 · Updated 4 months ago
- common in-memory tensor structure ☆978 · Updated last week
- heterogeneity-aware-lowering-and-optimization ☆255 · Updated last year
- Benchmark for embedded-AI deep learning inference engines such as NCNN / TNN / MNN / TensorFlow Lite. ☆204 · Updated 4 years ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability. ☆482 · Updated 5 months ago
- An MLIR-based compiler framework that bridges DSLs (domain-specific languages) to DSAs (domain-specific architectures). ☆584 · Updated last week
- ☆411 · Updated 6 months ago
- A performant and modular runtime for TensorFlow ☆759 · Updated this week
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations ☆176 · Updated 2 years ago
- Yinghan's Code Sample ☆320 · Updated 2 years ago
- ☆141 · Updated 2 months ago
- A home for the final text of all TVM RFCs. ☆102 · Updated 6 months ago
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆533 · Updated 2 years ago
- CS294; AI For Systems and Systems For AI ☆224 · Updated 5 years ago
- A simple, high-performance CUDA GEMM implementation (a naive row-major baseline is sketched after this list). ☆360 · Updated last year
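For orientation on what the matmul-focused entries above (the row-major matmul optimization, FBGEMM, and the CUDA GEMM sample) start from, here is a minimal, unoptimized row-major SGEMM kernel in CUDA with one thread per output element. This is an illustrative baseline sketch, not code from any of the listed repositories; those projects layer tiling, shared-memory blocking, and vectorization on top of this pattern.

```cuda
// Naive row-major SGEMM baseline: C = A * B, one thread per element of C.
// Illustrative sketch only; the repositories above provide optimized kernels.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void sgemm_naive(int M, int N, int K,
                            const float* A, const float* B, float* C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // row of C
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // column of C
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];   // row-major indexing
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;
    std::vector<float> hA(M * K, 1.0f), hB(K * N, 1.0f), hC(M * N, 0.0f);

    // Allocate device buffers and copy inputs.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, hA.size() * sizeof(float));
    cudaMalloc(&dB, hB.size() * sizeof(float));
    cudaMalloc(&dC, hC.size() * sizeof(float));
    cudaMemcpy(dA, hA.data(), hA.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), hB.size() * sizeof(float), cudaMemcpyHostToDevice);

    // One 16x16 thread block per 16x16 tile of C.
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    sgemm_naive<<<grid, block>>>(M, N, K, dA, dB, dC);

    cudaMemcpy(hC.data(), dC, hC.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %d)\n", hC[0], K);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Each output element re-reads a full row of A and column of B from global memory; the tiled and blocked kernels in the repositories above exist precisely to cut that memory traffic.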