MegEngine / examples
A set of examples around MegEngine
☆31 · Updated last year
Alternatives and similar repositories for examples:
Users that are interested in examples are comparing it to the libraries listed below
- Converter from MegEngine to other frameworks ☆69 · Updated last year
- Offline quantization tools for deployment. ☆123 · Updated last year
- ☆34 · Updated last year
- 🐱 ncnn int8 model quantization evaluation ☆12 · Updated 2 years ago
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT ☆61 · Updated 2 years ago
- PyTorch implementation of RAPQ, IJCAI 2022 ☆21 · Updated last year
- ☆23 · Updated last year
- Slides with modifications for a course at Tsinghua University. ☆58 · Updated 2 years ago
- ☆44 · Updated 3 years ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆86 · Updated last year
- NART (NART is not A RunTime), a deep learning inference framework. ☆38 · Updated last year
- TensorRT 2022 finals solution: TensorRT inference optimization of MST++, the first Transformer-based image reconstruction model ☆138 · Updated 2 years ago
- Basic quantization methods, including QAT, PTQ, per_channel, per_tensor, dorefa, lsq, adaround, omse, Histogram, bias_correction, etc. ☆42 · Updated 2 years ago
- Post-Training Quantization for Vision Transformers. ☆204 · Updated 2 years ago
- ☆35 · Updated 4 months ago
- ☆28 · Updated 3 years ago
- ☆59 · Updated 7 months ago
- A codebase & model zoo for pretrained backbones based on MegEngine. ☆33 · Updated last year
- A simple forward-inference framework extracted from MNN (for study!) ☆20 · Updated 4 years ago
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆41 · Updated last year
- CUDA Templates for Linear Algebra Subroutines ☆96 · Updated 9 months ago
- The official PyTorch implementation of the ICLR2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization ☆115 · Updated last year
- A collection of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) ☆42 · Updated 3 years ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆47 · Updated last year
- Code and notes for six major CUDA parallel computing patterns ☆60 · Updated 4 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆35 · Updated 2 years ago
- Inference of quantization-aware trained networks using TensorRT ☆80 · Updated 2 years ago
- Based on the paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆62 · Updated 4 years ago
- ☆26 · Updated last year
- OneFlow->ONNX ☆42 · Updated last year