OpenPPL / ppl.llm.kernel.cuda
☆133, updated 2 months ago
Related projects:
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios (☆20, updated last week)
- Yinghan's Code Sample (☆272, updated 2 years ago)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer (☆82, updated 6 months ago)
- Playing with GEMM in TVM (☆81, updated last year)
- Code reading notes for TVM (☆69, updated 2 years ago)
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) (☆74, updated last year)
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores via the WMMA API and MMA PTX instructions (☆266, updated last week)
- An easy-to-understand TensorOp matmul tutorial (☆265, updated this week)
- A fast communication-overlapping library for tensor parallelism on GPUs (☆184, updated this week)
- Development repository for the Triton-Linalg conversion (☆137, updated last month)
- FlagGems, an operator library for large language models implemented in the Triton language (☆246, updated this week)
- Dynamic memory management for serving LLMs without PagedAttention (☆186, updated last month)
- A tutorial for CUDA and PyTorch (☆110, updated this week)
- Efficient operator implementations for the Cambricon Machine Learning Unit (MLU) (☆100, updated this week)
- Standalone FlashAttention-2 kernel without a libtorch dependency (☆93, updated last week)
- A collection of memory-efficient attention operators implemented in the Triton language (☆205, updated 3 months ago)
- AI Accelerator Benchmark, which evaluates AI accelerators from a practical production perspective, including the ease of use and ver… (☆188, updated 3 weeks ago)
- A FlashAttention tutorial written in Python, Triton, CUDA, and CUTLASS (☆159, updated 3 months ago)
- Optimizing SGEMM kernels on NVIDIA GPUs to close-to-cuBLAS performance (☆265, updated 2 years ago)