Oneflow-Inc / oneflow_convert
OneFlow -> ONNX
☆43 Updated 2 years ago
Alternatives and similar repositories for oneflow_convert
Users that are interested in oneflow_convert are comparing it to the libraries listed below
- A toolkit that simplifies the transformation of nn.Module instances for developers, analogous to torch.fx. ☆13 Updated 2 years ago
- NART (NART is not A RunTime), a deep learning inference framework. ☆37 Updated 2 years ago
- OneFlow Serving ☆20 Updated last month
- Converter from MegEngine to other frameworks ☆69 Updated 2 years ago
- A study of CUTLASS ☆21 Updated 6 months ago
- Common libraries for PPL projects ☆29 Updated 2 months ago
- Symmetric INT8 GEMM ☆66 Updated 4 years ago
- A standalone GEMM kernel for FP16 activation and quantized weight, extracted from FasterTransformer ☆92 Updated last week
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library ☆68 Updated 9 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 Updated last year
- [CVPR-2023] Towards Any Structural Pruning ☆16 Updated 2 years ago
- Datasets, transforms and models specific to computer vision ☆85 Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆109 Updated 8 months ago
- A demo of how to write a high-performance convolution kernel for Apple silicon ☆54 Updated 3 years ago
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆37 Updated 3 months ago