sandeepkumar-skb / pytorch_custom_op
End to End steps for adding custom ops in PyTorch.
☆23 · Updated 4 years ago
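The repository's topic, adding a custom op to PyTorch, can be sketched minimally with `torch.autograd.Function`. This is not the repository's own code; the `ClampedReLU` op below is a hypothetical example, and it assumes a working PyTorch install:

```python
import torch

class ClampedReLU(torch.autograd.Function):
    """Hypothetical custom op: ReLU with an upper clamp (like ReLU6)."""

    @staticmethod
    def forward(ctx, x, bound):
        ctx.save_for_backward(x)
        ctx.bound = bound
        return x.clamp(min=0.0, max=bound)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Gradient passes through only where the input was strictly
        # inside (0, bound); it is zero in the clamped regions.
        mask = (x > 0) & (x < ctx.bound)
        return grad_out * mask, None  # None: no gradient w.r.t. bound

x = torch.tensor([-1.0, 3.0, 9.0], requires_grad=True)
y = ClampedReLU.apply(x, 6.0)
y.sum().backward()
```

For production ops the repository's end-to-end steps would typically also cover a C++/CUDA kernel and registration, but the autograd-level pattern above is the smallest self-contained starting point.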
Alternatives and similar repositories for pytorch_custom_op
Users interested in pytorch_custom_op are comparing it to the libraries listed below.
- An extension library of the WMMA API (Tensor Core API)☆99 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing.☆90 · Updated 2 weeks ago
- ☆102 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel …☆183 · Updated 5 months ago
- High-speed GEMV kernels, with at most a 2.7x speedup over the PyTorch baseline.☆112 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency☆110 · Updated 10 months ago
- ☆94 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer☆92 · Updated last week
- Artifacts of EVT ASPLOS'24☆26 · Updated last year
- MLIR-based partitioning system☆103 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance.☆86 · Updated 2 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning☆138 · Updated 2 years ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core)☆138 · Updated 4 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores.☆63 · Updated 10 months ago
- ☆49 · Updated last month
- ☆50 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton)☆43 · Updated 3 months ago
- ☆83 · Updated 8 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate.☆187 · Updated this week
- ☆39 · Updated 5 years ago
- A study of Ampere's sparse matmul☆18 · Updated 4 years ago
- DeeperGEMM: a heavily optimized version☆69 · Updated 2 months ago
- rocSHMEM: an intra-kernel networking runtime for AMD dGPUs on the ROCm platform.☆91 · Updated this week
- Complete GPU residency for ML.☆31 · Updated last week
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores☆52 · Updated last year
- ☆216 · Updated last year
- ☆123 · Updated 2 months ago
- llama INT4 CUDA inference with AWQ☆54 · Updated 5 months ago
- An extensible collectives library in Triton☆87 · Updated 3 months ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware.☆110 · Updated 7 months ago