mlcommons / inference_results_v2.1
This repository contains the results and code for the MLPerf™ Inference v2.1 benchmark.
☆ 18 · Updated last year
Related projects:
- OneFlow->ONNX ☆ 41 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆ 82 · Updated 6 months ago
- OneFlow Serving ☆ 20 · Updated 7 months ago
- A converter from MegEngine to other frameworks ☆ 67 · Updated last year
- A toolkit for developers that simplifies transforming nn.Module instances; it now corresponds to torch.fx ☆ 13 · Updated last year
- A set of examples around MegEngine ☆ 29 · Updated 9 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆ 93 · Updated last week
- Play GEMM with TVM ☆ 81 · Updated last year
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture ☆ 46 · Updated last month
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆ 52 · Updated 2 years ago
- Inference of quantization aware trained networks using TensorRT ☆ 77 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆ 20 · Updated 2 weeks ago
- A summary of system papers, frameworks, code, and tools for training or serving large models ☆ 56 · Updated 9 months ago
- An unofficial CUDA assembler for all generations of SASS, hopefully :) ☆ 74 · Updated last year
- Yet another polyhedral compiler for deep learning ☆ 19 · Updated last year
- NART ("NART is not A RunTime"), a deep learning inference framework ☆ 38 · Updated last year
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆ 23 · Updated last year
- My learning notes about AI, including machine learning and deep learning ☆ 18 · Updated 5 years ago
- Benchmark scripts for TVM ☆ 73 · Updated 2 years ago