xiaoyi018 / simple_gemm
☆22 · Updated 3 years ago
Alternatives and similar repositories for simple_gemm:
Users who are interested in simple_gemm are comparing it to the libraries listed below.
- ☆95 · Updated 3 years ago
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Play GEMM with TVM ☆89 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated last month
- ☆124 · Updated last year
- Symmetric int8 GEMM ☆66 · Updated 4 years ago
- ☆10 · Updated 3 weeks ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 3 weeks ago
- ☆87 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆179 · Updated 2 months ago
- ☆58 · Updated 4 months ago
- 📚FFPA(Split-D): Yet another Faster Flash Prefill Attention with O(1) GPU SRAM complexity for headdim > 256, ~2x↑🎉 vs SDPA EA. ☆157 · Updated this week
- A simplified flash-attention implementation using cutlass, intended for teaching ☆38 · Updated 7 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆108 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆90 · Updated last month
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆85 · Updated 6 years ago
- Simplify ONNX models larger than 2 GB ☆54 · Updated 4 months ago
- ☆114 · Updated last year
- ☆36 · Updated 5 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆60 · Updated 7 months ago
- ☆71 · Updated 2 years ago
- ☆60 · Updated 2 months ago
- A deep learning inference engine with a layered, decoupled design ☆72 · Updated last month
- A simple Transformer model implemented in C++. Attention Is All You Need. ☆47 · Updated 4 years ago
- ☆19 · Updated this week
- ☆139 · Updated 11 months ago
- A practical way of learning Swizzle ☆16 · Updated last month
- How to design a CPU GEMM on x86 with AVX256 that can beat OpenBLAS ☆69 · Updated 5 years ago
- GPTQ inference TVM kernel ☆38 · Updated 11 months ago