sandeepkumar-skb / pytorch_custom_op
End to End steps for adding custom ops in PyTorch.
☆23 · Updated 4 years ago
Alternatives and similar repositories for pytorch_custom_op
Users interested in pytorch_custom_op are comparing it to the libraries listed below.
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆88 · Updated this week
- ☆96 · Updated last year
- An extension library of the WMMA API (Tensor Core API). ☆97 · Updated 10 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆62 · Updated 8 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 10 months ago
- ☆86 · Updated 5 months ago
- MLIR-based partitioning system. ☆86 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper. ☆93 · Updated 6 years ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores. ☆51 · Updated last year
- ☆109 · Updated 3 weeks ago
- Extensible collectives library in Triton. ☆87 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency. ☆109 · Updated 8 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ☆79 · Updated 3 weeks ago
- ☆79 · Updated 6 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆182 · Updated 4 months ago
- DeeperGEMM: crazy optimized version. ☆69 · Updated 3 weeks ago
- Artifacts of EVT, ASPLOS'24. ☆25 · Updated last year
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. ☆92 · Updated this week
- ☆38 · Updated 5 years ago
- LLaMA INT4 CUDA inference with AWQ. ☆54 · Updated 4 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning. ☆138 · Updated 2 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆74 · Updated 2 weeks ago
- ☆59 · Updated last month
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Cores). ☆134 · Updated 4 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 6 months ago
- An extension of TVMScript to write simple, high-performance GPU kernels with Tensor Cores. ☆50 · Updated 10 months ago
- Optimize GEMM with Tensor Cores step by step. ☆26 · Updated last year
- A language and compiler for irregular tensor programs. ☆137 · Updated 6 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆150 · Updated this week
- CUTLASS and CuTe examples. ☆52 · Updated 5 months ago
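One of the listed projects benchmarks the "Online normalizer calculation for softmax" paper (Milakov & Gimelshein), which computes the max and the normalizer in a single pass instead of two. A minimal pure-Python sketch of that algorithm (function name and structure are illustrative, not taken from the benchmark repo):

```python
import math

def online_softmax(xs):
    # Single pass: maintain the running maximum m and running
    # normalizer d, rescaling d whenever a new maximum appears.
    m = float("-inf")
    d = 0.0
    for x in xs:
        m_new = max(m, x)
        # exp(m - m_new) rescales the accumulated normalizer to the new max.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # Final pass produces the normalized probabilities.
    return [math.exp(x - m) / d for x in xs]
```

The fused kernels in repos like the standalone Flash Attention v2 listing rely on exactly this rescaling trick to process tiles of the score matrix without materializing the full row first.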