bdhirsh / pytorch_open_registration_example
Example of using PyTorch's open device registration API
☆30 · Updated 2 years ago
Alternatives and similar repositories for pytorch_open_registration_example
Users interested in pytorch_open_registration_example are comparing it to the libraries listed below.
- An extension library of WMMA API (Tensor Core API) ☆106 · Updated last year
- Ahead of Time (AOT) Triton Math Library ☆78 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated last year
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆143 · Updated 5 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- ☆99 · Updated last year
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- ☆108 · Updated last year
- A home for the final text of all TVM RFCs. ☆107 · Updated last year
- ☆63 · Updated 9 months ago
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels. ☆134 · Updated 2 years ago
- ☆145 · Updated 8 months ago
- Experimental projects related to TensorRT ☆112 · Updated last week
- Assembler for NVIDIA Volta and Turing GPUs ☆230 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated last year
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated 3 weeks ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆101 · Updated 7 years ago
- ☆150 · Updated 9 months ago
- ☆39 · Updated 5 years ago
- ☆148 · Updated 5 months ago
- play gemm with tvm ☆92 · Updated 2 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- ☆121 · Updated 9 months ago
- ☆47 · Updated this week
- ☆68 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- High Performance Grouped GEMM in PyTorch ☆30 · Updated 3 years ago
- llama INT4 CUDA inference with AWQ