yzhaiustc / Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F
Stepwise optimizations of DGEMM on CPU, eventually exceeding Intel MKL performance, even under multithreading.
☆163 · Feb 3, 2022 · Updated 4 years ago
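The repository builds up its DGEMM kernel step by step with AVX512F. As a rough illustration of the kind of inner-kernel update such a walkthrough arrives at, here is a minimal sketch, not taken from the repository: one rank-1 FMA step on an 8×4 block of a column-major C. The function name `dgemm_8x4_step`, the 8×4 blocking, and the column-major layout are assumptions made for this example.

```c
#include <immintrin.h>

/* Illustrative sketch only (not the repository's actual kernel):
 * one rank-1 update step of an 8x4 block of column-major C,
 * C[0:8, 0:4] += A[0:8, k] * B[k, 0:4], using AVX512F FMAs.
 * Compile with e.g. gcc -O3 -mavx512f. */
static inline void dgemm_8x4_step(const double *a_col,  /* column k of the A panel (8 doubles) */
                                  const double *b_row,  /* row k of the B panel (4 doubles)    */
                                  double *c, int ldc) { /* C block and its leading dimension   */
    __m512d a = _mm512_loadu_pd(a_col);                 /* load 8 doubles of A                 */
    for (int j = 0; j < 4; ++j) {
        __m512d b  = _mm512_set1_pd(b_row[j]);          /* broadcast B[k, j]                   */
        __m512d cj = _mm512_loadu_pd(c + j * ldc);      /* load column j of the C block        */
        cj = _mm512_fmadd_pd(a, b, cj);                 /* C[:, j] += A[:, k] * B[k, j]        */
        _mm512_storeu_pd(c + j * ldc, cj);
    }
}
```

In a full kernel the C block would normally be kept in registers across the entire k loop rather than loaded and stored at every step; the sketch only shows the broadcast-and-FMA pattern.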
Alternatives and similar repositories for Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F
Users interested in Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F are comparing it to the libraries listed below
- SGEMM and DGEMM subroutines using AVX512F instructions. ☆15 · May 22, 2022 · Updated 3 years ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to a close-to-cuBLAS performance. ☆407 · Jan 2, 2025 · Updated last year
- My GEMM optimization on RPi (ARM) achieved a 170x performance boost, showing speeds faster than Eigen and close to OpenBLAS. ☆15 · Nov 17, 2024 · Updated last year
- ☆1,990 · Jul 29, 2023 · Updated 2 years ago
- Row-major matmul optimization ☆701 · Aug 20, 2025 · Updated 5 months ago
- ☆19 · Apr 6, 2024 · Updated last year
- A simple high performance CUDA GEMM implementation. ☆426 · Jan 4, 2024 · Updated 2 years ago
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs ☆13 · Apr 3, 2025 · Updated 10 months ago
- Repository for HPCGame 1st Problems. ☆71 · Feb 6, 2024 · Updated 2 years ago
- Accelerating CNN's convolution operation on GPUs by using memory-efficient data access patterns. ☆14 · Dec 8, 2017 · Updated 8 years ago
- BLISlab: A Sandbox for Optimizing GEMM ☆557 · Jun 17, 2021 · Updated 4 years ago
- The Zaychik Power Controller server ☆13 · Apr 13, 2024 · Updated last year
- ☆18 · Apr 8, 2022 · Updated 3 years ago
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction ☆21 · May 24, 2025 · Updated 8 months ago
- Flash Attention in raw CUDA C beating PyTorch ☆37 · May 14, 2024 · Updated last year
- ☆22 · May 15, 2021 · Updated 4 years ago
- This project is about convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution. ☆43 · Sep 29, 2025 · Updated 4 months ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruct… ☆523 · Sep 8, 2024 · Updated last year
- ☆40 · Feb 28, 2020 · Updated 5 years ago
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several… ☆1,239 · Jul 29, 2023 · Updated 2 years ago
- The repository maintains the source code for the article titled "Optimizing Attention by Exploiting Data Reuse on ARM Multi-core CPUs." ☆15 · Dec 1, 2024 · Updated last year
- Sparse Matrix-Vector Multiplication implementations in C ☆22 · Dec 7, 2022 · Updated 3 years ago
- ☆26 · Apr 2, 2025 · Updated 10 months ago
- A CPU tool for benchmarking peak floating-point performance ☆577 · Feb 7, 2026 · Updated last week
- How to design CPU GEMM on x86 with AVX256 that can beat OpenBLAS. ☆73 · Apr 15, 2019 · Updated 6 years ago
- 2023 XFlops Training ☆13 · Jan 23, 2024 · Updated 2 years ago
- ☆26 · Aug 9, 2025 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated 11 months ago
- ☆152 · Jan 9, 2025 · Updated last year
- Step-by-step optimization of CUDA SGEMM ☆431 · Mar 30, 2022 · Updated 3 years ago
- Shanghai Jiao Tong University Xflops supercomputing team, 2024 recruitment round-one assessment problems ☆14 · Oct 15, 2024 · Updated last year
- Library with JIT (just-in-time) compilation support to optimize performance of small and medium matrix multiplication ☆14 · Apr 27, 2021 · Updated 4 years ago
- ☆42 · Jan 24, 2026 · Updated 3 weeks ago
- Fast CUDA matrix multiplication from scratch ☆1,052 · Sep 2, 2025 · Updated 5 months ago
- High-speed GEMV kernels, with up to a 2.7x speedup over the PyTorch baseline. ☆128 · Jul 13, 2024 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆270 · Jul 1, 2025 · Updated 7 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Jan 28, 2025 · Updated last year
- ☆114 · May 16, 2025 · Updated 9 months ago
- ☆15 · Jun 26, 2024 · Updated last year