MegEngine / MegRay
A communication library for deep learning
☆50 · Updated 7 months ago
Alternatives and similar repositories for MegRay:
Users interested in MegRay are comparing it to the libraries listed below.
- MegEngine Documentation ☆44 · Updated 4 years ago
- Pre-trained models built on leading deep learning algorithms from Megvii Research, covering a variety of business scenarios ☆91 · Updated 10 months ago
- Benchmark of TVM quantized models on CUDA ☆112 · Updated 4 years ago
- Symmetric int8 GEMM ☆66 · Updated 4 years ago
- A converter from MegEngine to other frameworks ☆69 · Updated last year
- OneFlow->ONNX ☆42 · Updated last year
- heterogeneity-aware-lowering-and-optimization ☆255 · Updated last year
- Place for meetup slides ☆140 · Updated 4 years ago
- Common libraries for PPL projects ☆29 · Updated 3 weeks ago
- A Computation Graph Virtual Machine based ML Framework ☆108 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆64 · Updated 3 years ago
- TensorFlow and TVM integration ☆38 · Updated 4 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆196 · Updated 2 years ago
- ☆23 · Updated last year
- ☆95 · Updated 3 years ago
- ☆36 · Updated 5 months ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆82 · Updated 2 years ago
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS ☆69 · Updated 5 years ago
- CVFusion is an open-source deep learning compiler that fuses OpenCV operators ☆29 · Updated 2 years ago
- A hands-on tutorial on TVM's core principles ☆61 · Updated 4 years ago
- TVM tutorial ☆66 · Updated 6 years ago
- OneFlow models for benchmarking ☆105 · Updated 7 months ago
- Tengine GEMM tutorial, step by step ☆13 · Updated 4 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Fast CUDA Kernels for ResNet Inference ☆173 · Updated 5 years ago
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆141 · Updated this week
- NART ("NART is not A RunTime"), a deep learning inference framework ☆38 · Updated 2 years ago
- A demo of how to write a high-performance convolution that runs on Apple Silicon ☆54 · Updated 3 years ago