XPU-Forces / xpu_graph
A torch.compile backend targeting multiple hardware platforms
☆43 · Updated last week
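For context, xpu_graph plugs into PyTorch's compilation stack as a `torch.compile` backend. The sketch below shows how a custom backend is registered and invoked through the standard PyTorch API; the `inspect_backend` function is a hypothetical pass-through, not xpu_graph's actual entry point:

```python
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    """Hypothetical pass-through backend: receives the traced FX graph
    and returns a callable. A real multi-target backend (like xpu_graph)
    would rewrite the graph and lower it to device-specific kernels here."""
    print(f"captured {len(list(gm.graph.nodes))} FX nodes")
    return gm.forward  # run the traced graph as-is

@torch.compile(backend=inspect_backend)
def f(x):
    return torch.relu(x) + 1.0

y = f(torch.randn(8))
```

On the first call, TorchDynamo traces `f` into an FX graph and hands it to the backend; subsequent calls with compatible inputs reuse the compiled artifact.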
Alternatives and similar repositories for xpu_graph
Users interested in xpu_graph are comparing it to the libraries listed below.
- ☆164 · Updated 8 months ago
- An easy-to-understand TensorOp Matmul Tutorial · ☆403 · Updated this week
- Development repository for the Triton-Linalg conversion · ☆212 · Updated 11 months ago
- ☆158 · Updated 2 months ago
- A lightweight design for computation-communication overlap · ☆207 · Updated 2 weeks ago
- Yinghan's Code Sample · ☆363 · Updated 3 years ago
- Examples of CUDA implementations by Cutlass CuTe · ☆264 · Updated 6 months ago
- ☆255 · Updated last year
- Shared Middle-Layer for Triton Compilation · ☆321 · Updated last month
- A baseline repository of Auto-Parallelism in Training Neural Networks · ☆147 · Updated 3 years ago
- ☆156 · Updated last year
- ☆119 · Updated 9 months ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… · ☆513 · Updated last year
- Tile-based language built for AI computation across all scales · ☆115 · Updated 2 weeks ago
- ☆104 · Updated last year
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance · ☆398 · Updated last year
- ☆152 · Updated last year
- ☆112 · Updated 7 months ago
- GitHub mirror of the triton-lang/triton repo · ☆113 · Updated last week
- ☆153 · Updated last year
- Collection of benchmarks to measure basic GPU capabilities · ☆475 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later · ☆196 · Updated last month
- nnScaler: Compiling DNN models for Parallel Training · ☆123 · Updated 3 months ago
- ☆192 · Updated 2 years ago
- ☆70 · Updated last year
- A home for the final text of all TVM RFCs · ☆108 · Updated last year
- Open ABI and FFI for Machine Learning Systems · ☆293 · Updated this week
- ☆84 · Updated 3 years ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … · ☆191 · Updated 11 months ago
- A collection of memory-efficient attention operators implemented in the Triton language · ☆287 · Updated last year