sandeepkumar-skb / pytorch_custom_op
End-to-end steps for adding custom ops in PyTorch.
☆24 · Updated 5 years ago
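For context, the core idea the repo walks through (registering a custom operator that PyTorch's dispatcher can see) can also be sketched purely on the Python side with `torch.library.custom_op`, available in recent PyTorch releases. This is a minimal illustrative sketch, not the repo's own C++/CUDA build steps; the namespace `mylib` and the op `scaled_add` are made up for illustration.

```python
import torch
from torch import Tensor

# Register a custom op under a hypothetical "mylib" namespace.
# The eager implementation below could instead call into a compiled C++/CUDA kernel.
@torch.library.custom_op("mylib::scaled_add", mutates_args=())
def scaled_add(x: Tensor, y: Tensor, alpha: float) -> Tensor:
    return x + alpha * y

# Fake (meta) implementation: shape/dtype propagation for torch.compile and meta tensors.
@scaled_add.register_fake
def _(x: Tensor, y: Tensor, alpha: float) -> Tensor:
    return torch.empty_like(x)

if __name__ == "__main__":
    x, y = torch.randn(4), torch.randn(4)
    # Once registered, the op is reachable through the dispatcher like any built-in op.
    out = torch.ops.mylib.scaled_add(x, y, 2.0)
    print(torch.allclose(out, x + 2.0 * y))
```

The repo itself covers the C++/CUDA extension route; the Python-level registration above is just the shortest self-contained way to show the same dispatcher-registration idea.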
Alternatives and similar repositories for pytorch_custom_op
Users interested in pytorch_custom_op are comparing it to the libraries listed below.
- ☆111 · Updated last year
- ☆88 · Updated 8 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated 7 months ago
- A tool for examining GPU scheduling behavior. ☆91 · Updated last year
- An extension library of WMMA API (Tensor Core API) ☆109 · Updated last year
- ☆84 · Updated 3 years ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆145 · Updated 5 years ago
- NVSHMEM‑Tutorial: Build a DeepEP‑like GPU Buffer ☆158 · Updated 4 months ago
- NCCL Examples from Official NVIDIA NCCL Developer Guide. ☆20 · Updated 7 years ago
- An extension of TVMScript to write simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated last year
- ☆159 · Updated last year
- Incubator repo for CUDA-TileIR backend ☆97 · Updated 3 weeks ago
- MSLK (Meta Superintelligence Labs Kernels) is a collection of PyTorch GPU operator libraries that are designed and optimized for GenAI tr… ☆45 · Updated this week
- ☆53 · Updated 9 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ☆148 · Updated 8 months ago
- AMD RAD's Triton-based framework for seamless multi-GPU programming ☆164 · Updated this week
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels. ☆138 · Updated 2 years ago
- ☆40 · Updated 5 years ago
- Study of Ampere's sparse matmul ☆18 · Updated 5 years ago
- Artifacts of EVT ASPLOS'24 ☆28 · Updated last year
- Dissecting NVIDIA GPU Architecture ☆117 · Updated 3 years ago
- ☆102 · Updated last year
- CUDA Matrix Multiplication Optimization ☆256 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) on Tensor Cores for deep learning. ☆91 · Updated 3 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆127 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- ☆173 · Updated 8 months ago
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆144 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated 4 months ago