matrix97317 / OneNeuralNetwork
A cross-chip-platform collection of operators and a unified neural network library.
☆12 · Updated 10 months ago
Related projects:
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆74 · Updated last year
- Code and notes for the six major CUDA parallel computing patterns ☆57 · Updated 4 years ago
- Playing with GEMM in TVM ☆81 · Updated last year
- LLaMA 2 inference ☆35 · Updated 10 months ago
- Optimizing GEMM with Tensor Cores, step by step ☆11 · Updated 9 months ago
- A deep learning inference engine with layered, decoupled design ☆58 · Updated 3 weeks ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆82 · Updated 6 months ago
- A study of CUTLASS ☆18 · Updated last year
- Materials related to the Triton compiler ☆27 · Updated 3 months ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios ☆20 · Updated 2 weeks ago
- My learning notes about AI, including machine learning and deep learning ☆18 · Updated 5 years ago
- How to design a CPU GEMM on x86 with AVX256 that can beat OpenBLAS ☆64 · Updated 5 years ago
- A tutorial for CUDA & PyTorch ☆110 · Updated last week
- A study of Ampere's sparse matmul ☆13 · Updated 3 years ago
- CVFusion is an open-source deep learning compiler that fuses OpenCV operators ☆26 · Updated 2 years ago
- Code-reading notes for TVM ☆69 · Updated 2 years ago
- A benchmark suite designed especially for deep learning operators ☆40 · Updated last year
- Examples for the TVM schedule API ☆97 · Updated last year
- A standalone Flash Attention v2 kernel without a libtorch dependency ☆93 · Updated last week