Stepwise optimization of DGEMM on CPUs, eventually exceeding Intel MKL performance, even under multithreading.
☆163 · Feb 3, 2022 · Updated 4 years ago
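The "stepwise" approach such DGEMM tutorials take typically starts from a naive triple loop and adds cache blocking before vectorization and micro-kernel tuning. A minimal sketch of that first step in plain C (illustrative only, not this repository's code; the block size `BS` is an assumed tuning parameter):

```c
// Naive DGEMM reference vs. a cache-blocked variant (the first typical
// optimization step). All matrices are N x N, row-major, C = A * B.
#include <assert.h>
#include <math.h>
#include <stddef.h>

// Naive triple loop: O(N^3) flops with poor cache locality on B.
static void dgemm_naive(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double s = 0.0;
            for (size_t k = 0; k < n; k++)
                s += A[i * n + k] * B[k * n + j];
            C[i * n + j] = s;
        }
}

// Same computation, tiled so that BS x BS blocks of A and B stay in cache.
#define BS 32
static void dgemm_blocked(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n * n; i++) C[i] = 0.0;
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t kk = 0; kk < n; kk += BS)
            for (size_t jj = 0; jj < n; jj += BS)
                for (size_t i = ii; i < ii + BS && i < n; i++)
                    for (size_t k = kk; k < kk + BS && k < n; k++) {
                        double a = A[i * n + k];  // reused across the j loop
                        for (size_t j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

Later steps in such repos usually replace the innermost loops with an AVX-512 register-blocked micro-kernel (`_mm512_fmadd_pd` over packed panels), which requires AVX-512F hardware and is omitted here.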
Alternatives and similar repositories for Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F
Users interested in Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F are comparing it to the libraries listed below.
- SGEMM and DGEMM subroutines using AVX512F instructions. ☆15 · May 22, 2022 · Updated 3 years ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance. ☆416 · Jan 2, 2025 · Updated last year
- Row-major matmul optimization. ☆723 · Feb 24, 2026 · Updated 2 months ago
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs. ☆14 · Apr 3, 2025 · Updated last year
- ☆18 · Apr 8, 2022 · Updated 4 years ago
- Repository for HPCGame 1st problems. ☆71 · Feb 6, 2024 · Updated 2 years ago
- A simple high-performance CUDA GEMM implementation. ☆434 · Jan 4, 2024 · Updated 2 years ago
- ☆19 · Apr 6, 2024 · Updated 2 years ago
- The Zaychik Power Controller server. ☆13 · Apr 13, 2024 · Updated 2 years ago
- Accelerating CNN convolution operations on GPUs using memory-efficient data access patterns. ☆14 · Dec 8, 2017 · Updated 8 years ago
- BLISlab: A Sandbox for Optimizing GEMM. ☆563 · Jun 17, 2021 · Updated 4 years ago
- ☆29 · Apr 18, 2024 · Updated 2 years ago
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction. ☆21 · May 24, 2025 · Updated 11 months ago
- How to design CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS. ☆73 · Apr 15, 2019 · Updated 7 years ago
- Flash Attention in raw CUDA C, beating PyTorch. ☆38 · May 14, 2024 · Updated last year
- ☆10 · Mar 2, 2024 · Updated 2 years ago
- ☆40 · Feb 28, 2020 · Updated 6 years ago
- Shanghai Jiao Tong University Xflops supercomputing team, 2024 recruitment round-one assessment problems. ☆14 · Oct 15, 2024 · Updated last year
- Convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution. ☆43 · Sep 29, 2025 · Updated 7 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆544 · Sep 8, 2024 · Updated last year
- ☆26 · Apr 5, 2026 · Updated last month
- A CPU tool for benchmarking peak floating-point performance. ☆581 · Updated this week
- A series of GPU optimization topics, introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,298 · Jul 29, 2023 · Updated 2 years ago
- Sparse matrix-vector multiplication (SpMV) implementations in C. ☆22 · Dec 7, 2022 · Updated 3 years ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆46 · Feb 27, 2025 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe. ☆272 · Jul 1, 2025 · Updated 10 months ago
- Library with JIT (just-in-time) compilation support to optimize the performance of small and medium matrix multiplications. ☆14 · Apr 27, 2021 · Updated 5 years ago
- 2023 XFlops training. ☆13 · Jan 23, 2024 · Updated 2 years ago
- ☆150 · Jan 9, 2025 · Updated last year
- A direct convolution library targeting ARM multi-core CPUs. ☆12 · Nov 27, 2024 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year
- DGEMM on KNL, achieving 75% of MKL performance. ☆19 · May 19, 2022 · Updated 3 years ago
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging the Hopper architecture's TMA, WGMMA, and Thread Block Cluste… ☆10 · Dec 4, 2024 · Updated last year
- HUST CS 2019 comprehensive hardware training: computer organization course project, a RISC-V implementation. ☆16 · Nov 3, 2022 · Updated 3 years ago
- Xiao's CUDA Optimization Guide [NO LONGER ADDING NEW CONTENT]. ☆325 · Nov 8, 2022 · Updated 3 years ago
- ☆28 · Aug 9, 2025 · Updated 9 months ago
- Homework for CMU 10-414/714: Deep Learning Systems (https://dlsyscourse.org/). ☆15 · Mar 21, 2024 · Updated 2 years ago
- Solutions to Programming Massively Parallel Processors, 2nd edition. ☆36 · Jun 4, 2022 · Updated 3 years ago
- Spack package repository maintained by the Student Cluster Competition team at Sun Yat-sen University. ☆16 · Aug 20, 2025 · Updated 8 months ago