Oneflow-Inc / serving
OneFlow Serving
☆20 · Updated 2 months ago
Alternatives and similar repositories for serving:
Users that are interested in serving are comparing it to the libraries listed below
- OneFlow->ONNX ☆42 · Updated last year
- ☆23 · Updated last year
- ☆18 · Updated last year
- ☆12 · Updated 2 years ago
- A toolkit for developers to simplify the transformation of nn.Module instances. It now corresponds to PyTorch's torch.fx. ☆13 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated 9 months ago
- ☆15 · Updated 11 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆93 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆105 · Updated 6 months ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆17 · Updated 5 months ago
- A study of CUTLASS ☆21 · Updated 4 months ago
- ☆84 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (see the WMMA sketch after this list). ☆59 · Updated last week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆34 · Updated this week
- ☆11 · Updated last year
- OneFlow documentation ☆68 · Updated 8 months ago
- CVFusion is an open-source deep learning compiler that fuses OpenCV operators. ☆29 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆89 · Updated 2 weeks ago
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆18 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆66 · Updated 8 months ago
- GPTQ inference TVM kernel ☆39 · Updated 10 months ago
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 2 weeks ago
- ☆42 · Updated last month
- ☆72 · Updated 3 months ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆82 · Updated last year
- fp8 flash attention implemented on the Ada architecture using the CUTLASS repository ☆57 · Updated 7 months ago
- ☆11 · Updated last year
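
Two of the entries above lend themselves to short illustrations. First, the Roofline Model used by the hardware-comparison repository boils down to a single formula: attainable throughput = min(peak compute, memory bandwidth × arithmetic intensity). The sketch below applies it to one LLM decode-step GEMV; the peak and bandwidth figures are illustrative assumptions (roughly A100-class), not numbers taken from that repository.

```cuda
// Roofline estimate for one LLM decode GEMV: y = W * x, W in fp16.
// Attainable FLOP/s = min(peak_flops, bandwidth * arithmetic_intensity).
// Hardware numbers below are illustrative assumptions, not measured values.
#include <cstdio>
#include <algorithm>

int main() {
    const double peak_flops = 312e12;  // assumed fp16 Tensor Core peak, FLOP/s
    const double bandwidth  = 2.0e12;  // assumed HBM bandwidth, bytes/s

    // One decode step of a 4096x4096 projection at batch size 1:
    // 2*M*N FLOPs; weight traffic dominates the byte count (fp16 = 2 B/elem).
    const double M = 4096, N = 4096;
    const double flops = 2.0 * M * N;
    const double bytes = M * N * 2.0;

    const double intensity  = flops / bytes;  // FLOP per byte moved
    const double attainable = std::min(peak_flops, bandwidth * intensity);

    printf("arithmetic intensity: %.2f FLOP/B\n", intensity);
    printf("attainable: %.2f TFLOP/s (%s-bound)\n", attainable / 1e12,
           attainable < peak_flops ? "memory" : "compute");
    return 0;
}
```

At 2 FLOP/B the GEMV sits far left of the roofline's ridge point, which is why decode-stage inference is memory-bound on essentially every current GPU.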
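Second, the HGEMM entry builds GEMM on Tensor Cores via the WMMA API. The kernel below is a minimal sketch of that idea, not code from that repository: each warp accumulates one 16×16 tile of C = A·B directly from global memory, assuming row-major fp16 inputs whose dimensions are multiples of 16. A peak-performance kernel would add shared-memory staging, double buffering, and larger per-warp tiles.

```cuda
// Minimal HGEMM with the WMMA API: one warp computes one 16x16 tile of
// C = A * B (fp16 inputs, fp32 accumulation). Sizes M, N, K are assumed
// to be multiples of 16 and all matrices row-major.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_hgemm(const half* A, const half* B, float* C,
                           int M, int N, int K) {
    // Map each warp to one 16x16 output tile.
    int warpM = blockIdx.y * blockDim.y + threadIdx.y;
    int warpN = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);

    // March along K in 16-wide steps, accumulating into the fragment.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + k * N + warpN * 16, N);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // c += a * b
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, c_frag, N,
                            wmma::mem_row_major);
}
```

A matching launch would be, for example, `dim3 block(32, 4); dim3 grid(N / 16, M / 64);`, so that the 32-thread x-dimension holds one warp and each warp lands on a distinct output tile.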