Ascend / tools
☆16 Updated last year
Alternatives and similar repositories for tools:
Users who are interested in tools are comparing it to the libraries listed below.
- [CVPR 2023] Towards Any Structural Pruning ☆16 Updated 2 years ago
- A codebase & model zoo for pretrained backbones based on MegEngine. ☆33 Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 Updated 10 months ago
- A converter from MegEngine to other frameworks ☆69 Updated 2 years ago
- A toolkit for developers to simplify the transformation of nn.Module instances; it now corresponds to PyTorch's torch.fx. ☆13 Updated 2 years ago
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 Updated last year
- An easy way to run, test, benchmark and tune OpenCL kernel files ☆23 Updated last year
- ☆99 Updated 3 years ago
- ☆13 Updated 2 years ago
- OneFlow->ONNX ☆43 Updated 2 years ago
- ☆69 Updated 2 years ago
- This repository contains the results and code for the MLPerf™ Inference v2.1 benchmark. ☆18 Updated last year
- ☕️ A VS Code extension for Netron; supports *.pdmodel, *.nb, *.onnx, *.pb, *.h5, *.tflite, *.pth, *.pt, *.mnn, *.param, etc. ☆13 Updated last year
- ☆23 Updated last year
- ☆18 Updated last year
- Wanwu models release; code will be released soon ☆24 Updated 2 years ago
- Whisper in TensorRT-LLM ☆15 Updated last year
- Guide to deploying deep-learning inference networks and deep vision primitives on Sophon TPU. ☆35 Updated last year
- Symmetric int8 GEMM ☆67 Updated 4 years ago
- Demonstration of the use of TensorRT and Triton ☆16 Updated 4 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆22 Updated last year
- MegEngine Official Documentation ☆39 Updated 4 months ago
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 Updated last year
- ☢️ TensorRT 2023 finals: Llama model inference acceleration and optimization based on TensorRT-LLM ☆46 Updated last year
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆38 Updated 2 years ago
- Sandbox for TVM and playing around! ☆22 Updated 2 years ago
- Simple examples of using Bazel to cross-compile AI applications for armv7hf devices. ☆25 Updated 3 years ago
- NART (NART is not A RunTime), a deep learning inference framework. ☆38 Updated 2 years ago
- A toolkit to help optimize large ONNX models ☆153 Updated 11 months ago
- Yet another polyhedral compiler for deep learning ☆19 Updated 2 years ago