ARM-software / kleidiai
This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai
☆101 · Updated last week
Alternatives and similar repositories for kleidiai
Users interested in kleidiai are comparing it to the libraries listed below.
- FlagTree is a unified compiler supporting multiple AI-chip backends for custom deep-learning operations, forked from triton-lang…☆140 · Updated this week
- OpenAI Triton backend for Intel® GPUs☆221 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper (a sketch of the online algorithm follows this list)☆102 · Updated 7 years ago
- BitBLAS is a library supporting mixed-precision matrix multiplication, especially for quantized LLM deployment (see the INT4 packing sketch after this list).☆723 · Updated 4 months ago
- ☆170 · Updated 3 weeks ago
- An experimental CPU backend for Triton☆165 · Updated 3 weeks ago
- ☆166 · Updated 2 years ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel …☆190 · Updated 10 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend☆88 · Updated this week
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries and software☆58 · Updated 9 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance.☆135 · Updated 6 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline.☆123 · Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon☆57 · Updated 3 years ago
- Shared Middle-Layer for Triton Compilation☆316 · Updated last month
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores.☆70 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency☆112 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing.☆102 · Updated 5 months ago
- CUDA Matrix Multiplication Optimization☆241 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware.☆112 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).☆272 · Updated 4 months ago
- ☆51 · Updated this week
- PyTorch emulation library for Microscaling (MX)-compatible data formats (see the MX block-quantization sketch after this list)☆324 · Updated 5 months ago
- 🤖FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA.☆235 · Updated 2 weeks ago
- A home for the final text of all TVM RFCs.☆108 · Updated last year
- Llama INT4 CUDA inference with AWQ☆55 · Updated 10 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton)☆47 · Updated 3 months ago
- Ahead-of-Time (AOT) Triton Math Library☆84 · Updated 3 weeks ago
- ☆158 · Updated 7 months ago
- ☆96 · Updated last year
- The original reference implementation of a llama.cpp backend for the Qualcomm Hexagon NPU on Android phones, https://github.com/ggml…☆35 · Updated 4 months ago
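
The online-softmax entry above refers to a specific single-pass algorithm: instead of first scanning for the maximum and then summing exponentials, it maintains a running maximum and a running normalizer together, rescaling the normalizer whenever the maximum grows. A minimal NumPy sketch of that idea (the function name and the test are mine, not the benchmark repo's):

```python
import numpy as np

def online_softmax(x):
    # Single pass: keep a running max m and a running normalizer d.
    # When a larger element appears, rescale d by exp(m_old - m_new)
    # so it remains the sum of exp(x_j - m) for the current m.
    m, d = -np.inf, 0.0
    for v in x:
        m_new = max(m, float(v))
        d = d * np.exp(m - m_new) + np.exp(float(v) - m_new)
        m = m_new
    return np.exp(np.asarray(x) - m) / d

# Agrees with the conventional two-pass softmax.
x = np.random.randn(16).astype(np.float32)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref, atol=1e-6)
```

This one-pass formulation is the building block that FlashAttention-style kernels (including the standalone Flash Attention v2 and FFPA entries above) use to compute attention without materializing the full score matrix.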
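
BitBLAS-style mixed-precision GEMM for quantized LLMs typically stores weights in a packed low-bit layout with per-group scales and dequantizes inside the kernel. A rough NumPy sketch of the W4A16 data-layout side; the nibble packing, group size, and function names here are illustrative assumptions, not BitBLAS's actual layout:

```python
import numpy as np

def quantize_int4(w, group=128):
    # Per-group symmetric quantization: each group of `group` weights
    # along a row shares one float scale; values land in [-8, 7].
    g = w.reshape(w.shape[0], -1, group)
    scale = np.abs(g).max(axis=2, keepdims=True) / 7.0 + 1e-12
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale

def pack_nibbles(q):
    # Two signed 4-bit values per byte, low nibble first.
    u = (q + 8).astype(np.uint8)          # shift [-8, 7] -> [0, 15]
    return u[..., 0::2] | (u[..., 1::2] << 4)

def unpack_nibbles(p):
    lo = (p & 0x0F).astype(np.int8) - 8
    hi = (p >> 4).astype(np.int8) - 8
    out = np.empty(p.shape[:-1] + (2 * p.shape[-1],), np.int8)
    out[..., 0::2], out[..., 1::2] = lo, hi
    return out

def w4a16_matmul(x, packed, scale):
    # Dequantize to float, then a plain fp matmul; a GPU library would
    # instead fuse the dequantization into the GEMM kernel itself.
    w = unpack_nibbles(packed).astype(np.float32) * scale
    return x @ w.reshape(w.shape[0], -1).T

w = np.random.randn(64, 256).astype(np.float32)
q, s = quantize_int4(w)
y = w4a16_matmul(np.random.randn(8, 256).astype(np.float32), pack_nibbles(q), s)
```

The point of the packed layout is that the weights never round-trip through full-precision memory: only the int4 payload and the small per-group scales are read at inference time.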
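
The Microscaling (MX) formats emulated by the library above share one idea: a block of k consecutive elements (k = 32 in the OCP MX spec) carries a single power-of-two scale, while each element is stored in a narrow type (FP8/FP6/FP4/INT8 variants). A simplified sketch using an int8 element type; the round-up scale selection below is my assumption for illustration, not necessarily the spec's exact rule:

```python
import numpy as np

def mx_int8_quantize(x, block=32):
    # Each block of `block` values shares one power-of-two scale (the
    # E8M0 shared exponent in MX); elements are stored as int8.
    b = x.reshape(-1, block)
    amax = np.abs(b).max(axis=1, keepdims=True)
    # Smallest power of two such that amax / scale <= 127.
    exp = np.ceil(np.log2(np.maximum(amax, 1e-38) / 127.0))
    scale = np.exp2(exp)
    q = np.clip(np.round(b / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def mx_int8_dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.random.randn(8, 32).astype(np.float32)
q, s = mx_int8_quantize(x)
err = np.abs(mx_int8_dequantize(q, s) - x.reshape(-1, 32)).max()
print(f"max abs reconstruction error: {err:.4f}")
```

Because the shared scale is a pure power of two, applying it in hardware is an exponent adjustment rather than a multiply, which is what makes the per-block scaling cheap.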