ARM-software / kleidiai
This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai
☆113 · Updated this week
Alternatives and similar repositories for kleidiai
Users interested in kleidiai are comparing it to the libraries listed below.
- Open ABI and FFI for Machine Learning Systems ☆333 · Updated this week
- An experimental CPU backend for Triton ☆174 · Updated 2 months ago
- FlagTree is a unified compiler supporting multiple AI chip backends for custom deep learning operations, forked from triton-lang… ☆200 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance ⚡️ ☆148 · Updated 8 months ago
- OpenAI Triton backend for Intel® GPUs ☆226 · Updated last week
- ☆172 · Updated this week
- BitBLAS is a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆751 · Updated 6 months ago
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries and software ☆60 · Updated 11 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated 7 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆105 · Updated 7 years ago
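The "Online normalizer calculation for softmax" entry above refers to the single-pass normalizer trick from the Milakov & Gimelshein paper: the running maximum and the running sum of exponentials are updated together, rescaling the sum whenever the maximum grows. A minimal Python sketch of that idea (the function name and structure are illustrative, not taken from the benchmark repository):

```python
import math

def online_softmax(xs):
    # One pass over the input: keep a running max `m` and a running
    # normalizer `d`; rescale `d` whenever the max increases.
    m = float("-inf")
    d = 0.0
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # A second pass produces the softmax probabilities.
    return [math.exp(x - m) / d for x in xs]
```

Because the max and normalizer are computed in the same pass, this needs one fewer sweep over the data than the classic three-pass (max, sum, divide) formulation, which is why it matters for memory-bound kernels.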
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆340 · Updated 7 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- Triton for DSA ☆57 · Updated last week
- Shared Middle-Layer for Triton Compilation ☆325 · Updated 2 months ago
- ☆170 · Updated 2 years ago
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆564 · Updated 2 weeks ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆127 · Updated last year
- CUDA Matrix Multiplication Optimization ☆256 · Updated last year
- AI Tensor Engine for ROCm ☆351 · Updated this week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆250 · Updated this week
- An extension library of the WMMA API (Tensor Core API) ☆109 · Updated last year
- ☆164 · Updated last year
- Fast and memory-efficient exact attention ☆111 · Updated last week
- AMD-SHARK Inference Modeling and Serving ☆62 · Updated this week
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆168 · Updated this week
- Tile-based language built for AI computation across all scales ☆120 · Updated this week
- ☆104 · Updated last year