sandeepkumar-skb / pytorch_custom_op
End to End steps for adding custom ops in PyTorch.
☆23Updated 5 years ago
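The repository above walks through adding custom ops to PyTorch end to end. As a minimal sketch of the idea (not the repository's actual code), a custom op with its own backward pass can be defined in pure Python via `torch.autograd.Function` before graduating to a C++/CUDA extension; the `MySquare` name here is illustrative:

```python
import torch

class MySquare(torch.autograd.Function):
    """Custom op: forward computes x^2, backward supplies the 2x gradient."""

    @staticmethod
    def forward(ctx, x):
        # Stash the input so backward can use it.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d(x^2)/dx = 2x, chained with the incoming gradient.
        return 2 * x * grad_out

x = torch.tensor([3.0], requires_grad=True)
y = MySquare.apply(x)
y.backward()
print(x.grad)  # tensor([6.])
```

A C++/CUDA custom op follows the same forward/backward contract, with the kernels compiled and bound through a PyTorch extension instead of defined in Python.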
Alternatives and similar repositories for pytorch_custom_op
Users interested in pytorch_custom_op are comparing it to the libraries listed below.
- ☆108Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing.☆98Updated 3 months ago
- An extension library of WMMA API (Tensor Core API)☆106Updated last year
- A tool for examining GPU scheduling behavior.☆88Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel …☆186Updated 8 months ago
- ☆83Updated 2 years ago
- ☆57Updated 4 months ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores.☆51Updated last year
- Distributed MoE in a Single Kernel [NeurIPS '25]☆85Updated 2 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance.⚡️☆120Updated 5 months ago
- NVSHMEM‑Tutorial: Build a DeepEP‑like GPU Buffer☆138Updated last month
- ☆124Updated 9 months ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Cores)☆143Updated 5 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning☆138Updated 2 years ago
- GitHub mirror of the triton-lang/triton repo.☆84Updated last week
- ☆92Updated 11 months ago
- An experimental CPU backend for Triton☆153Updated this week
- ☆39Updated 5 years ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin.☆72Updated last week
- A study of Ampere's sparse matmul☆18Updated 4 years ago
- ☆148Updated 5 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling☆62Updated last year
- DeeperGEMM: crazy optimized version☆72Updated 5 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores.☆89Updated 2 years ago
- Artifacts of EVT ASPLOS'24☆26Updated last year
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores☆53Updated last year
- TVM FFI☆67Updated last week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer☆94Updated last month
- ☆32Updated 2 years ago
- ☆45Updated 5 months ago