ARM-software / kleidiai
This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai
☆37 · Updated this week
Alternatives and similar repositories for kleidiai
Users interested in kleidiai are comparing it to the libraries listed below.
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 8 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆84 · Updated this week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆61 · Updated 8 months ago
- llama INT4 CUDA inference with AWQ ☆54 · Updated 3 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆106 · Updated 10 months ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 · Updated last month
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries, and software ☆33 · Updated 2 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆18 · Updated 7 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (the recurrence is sketched after this list) ☆91 · Updated 6 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 · Updated last week
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- A practical way of learning Swizzle ☆19 · Updated 3 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆26 · Updated 4 months ago
- 📚 FFPA (Split-D): extends FlashAttention with Split-D for large headdim, O(1) GPU SRAM complexity, 1.8x~3x faster than SDPA EA 🎉 ☆171 · Updated last month
- RISC-V C and Triton AI benchmark ☆16 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance (see the WMMA sketch after this list) ☆75 · Updated last month
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆185 · Updated 11 months ago
- Step-by-step SGEMM optimization in CUDA (the naive starting-point kernel is sketched after this list) ☆18 · Updated last year
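The "Online normalizer calculation for softmax" entry above benchmarks a one-pass normalizer. Below is a minimal host-side sketch of that recurrence; the function name and container choice are mine, not taken from the benchmark repo. The idea: keep a running maximum `m` and a running sum `d` of shifted exponentials, rescaling `d` whenever `m` increases, so the max and the sum are computed in a single pass instead of two.

```cuda
#include <cmath>
#include <vector>

// Sketch of the online softmax normalizer recurrence (hypothetical helper,
// not code from the benchmark repo). Classic softmax takes two passes over x
// (max, then sum of exponentials); this folds them into one.
std::vector<float> online_softmax(const std::vector<float>& x) {
    float m = -INFINITY;  // running maximum of the elements seen so far
    float d = 0.0f;       // running sum of exp(x[j] - m)
    for (float xi : x) {
        float m_new = std::max(m, xi);
        // Rescale the old sum to the new maximum, then add the new term.
        d = d * std::exp(m - m_new) + std::exp(xi - m_new);
        m = m_new;
    }
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / d;  // softmax using the fused statistics
    return y;
}
```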
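The HGEMM-from-scratch entry builds on the CUDA WMMA API. Here is a minimal sketch of the core WMMA pattern, assuming row-major fp16 inputs with a float accumulator and dimensions that are multiples of 16; the kernel name and launch shape are illustrative, not taken from that repo.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp (a single 32-thread block) computes one 16x16 tile of C = A * B.
// A is MxK, B is KxN, both row-major half; C is row-major float.
__global__ void hgemm_wmma_tile(const half* A, const half* B, float* C,
                                int M, int N, int K) {
    int tileM = blockIdx.y;  // 16-row band of C handled by this warp
    int tileN = blockIdx.x;  // 16-column band of C handled by this warp

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);

    // March along K in 16-wide steps, one Tensor Core MMA per step.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a, A + tileM * 16 * K + k, K);
        wmma::load_matrix_sync(b, B + k * N + tileN * 16, N);
        wmma::mma_sync(acc, a, b, acc);
    }
    wmma::store_matrix_sync(C + tileM * 16 * N + tileN * 16, acc, N,
                            wmma::mem_row_major);
}
// Launch sketch: dim3 grid(N / 16, M / 16); hgemm_wmma_tile<<<grid, 32>>>(A, B, C, M, N, K);
```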
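The step-by-step SGEMM entry typically begins from a naive kernel before layering on shared-memory tiling, vectorized loads, and double buffering. A sketch of that usual starting point follows (one thread per output element; the kernel name and launch shape are illustrative):

```cuda
// Naive SGEMM: C = alpha * A * B + beta * C, all matrices row-major float.
// One thread computes one element of C; no shared-memory tiling yet.
__global__ void sgemm_naive(int M, int N, int K, float alpha,
                            const float* A, const float* B,
                            float beta, float* C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];  // dot product over K
        C[row * N + col] = alpha * acc + beta * C[row * N + col];
    }
}
// Launch sketch: dim3 block(16, 16), grid((N + 15) / 16, (M + 15) / 16);
```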