Ascend / torchair
☆18 · Updated last week
Alternatives and similar repositories for torchair
Users interested in torchair are comparing it to the libraries listed below.
- ☆96 · Updated 6 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆76 · Updated 3 weeks ago
- ☆100 · Updated last year
- ☆65 · Updated 5 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆68 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a worked roofline sketch follows this list). ☆115 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated last year
- GPTQ inference TVM kernel ☆39 · Updated last year
- A Triton JIT runtime and FFI provider in C++ ☆26 · Updated this week
- Tile-based language built for AI computation across all scales ☆66 · Updated 2 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance.⚡️ ☆120 · Updated 5 months ago
- DeeperGEMM: crazy optimized version ☆72 · Updated 5 months ago
- ☆19 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆138 · Updated last month
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆186 · Updated 8 months ago
- A practical way of learning Swizzle ☆28 · Updated 8 months ago
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. ☆90 · Updated this week
- A lightweight design for computation-communication overlap. ☆181 · Updated last week
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated 2 months ago
- ☆148 · Updated 7 months ago
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆41 · Updated 7 months ago
- ☆91 · Updated 2 weeks ago
- Pipeline Parallelism Emulation and Visualization ☆67 · Updated 4 months ago
- A simple calculation of LLM MFU (Model FLOPs Utilization); a worked MFU sketch follows this list. ☆48 · Updated last month
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆98 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆112 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 4 months ago
- A standalone GEMM kernel for FP16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated last month
- ☆100 · Updated 5 months ago
- PyTorch distributed training acceleration framework ☆52 · Updated 2 months ago
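Two of the items above invite short worked examples. First, for the roofline-model comparison entry: a minimal sketch of the roofline calculation for LLM decoding, in Python. The peak-throughput and bandwidth numbers below are illustrative assumptions, not measurements of any particular accelerator.

```python
# Minimal roofline sketch: is LLM decoding compute- or memory-bound?
# All hardware numbers below are illustrative placeholders, not vendor specs.

PEAK_TFLOPS = 312.0    # assumed peak FP16 throughput, TFLOP/s
PEAK_BW_GBPS = 2039.0  # assumed HBM bandwidth, GB/s

def attainable_tflops(arithmetic_intensity: float) -> float:
    """Roofline: performance is capped by min(compute roof, bandwidth * AI)."""
    return min(PEAK_TFLOPS, PEAK_BW_GBPS / 1e3 * arithmetic_intensity)

# Decoding one token at batch size 1 reads every weight once (FP16 = 2 bytes)
# and does ~2 FLOPs per parameter, so arithmetic intensity is ~1 FLOP/byte.
ai_decode = 2.0 / 2.0
print(f"decode AI = {ai_decode:.1f} FLOP/B -> "
      f"{attainable_tflops(ai_decode):.1f} TFLOP/s attainable")
# Far below the compute roof: single-batch decoding is bandwidth-bound.
```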
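Second, for the LLM MFU entry: a minimal MFU sketch. It uses the common ~6N FLOPs-per-training-token rule of thumb for dense decoder-only models; the model size, token throughput, GPU count, and peak throughput are all assumed for illustration.

```python
# Minimal MFU (Model FLOPs Utilization) sketch for LLM training:
# MFU = achieved model FLOP/s / peak hardware FLOP/s.

def mfu(n_params: float, tokens_per_sec: float,
        n_gpus: int, peak_tflops_per_gpu: float) -> float:
    achieved_flops = 6.0 * n_params * tokens_per_sec  # ~6N FLOPs per token
    peak_flops = n_gpus * peak_tflops_per_gpu * 1e12  # hardware FLOP/s
    return achieved_flops / peak_flops

# Example with assumed numbers: a 7B-parameter model training at
# 0.8M tokens/s on 256 GPUs, each with a 312 TFLOP/s FP16 peak.
print(f"MFU = {mfu(7e9, 8e5, 256, 312.0):.1%}")  # -> MFU = 42.1%
```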