NVIDIA / HMM_sample_code
CUDA 12.2 HMM demos
☆20 · Updated last year
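HMM_sample_code demonstrates CUDA 12.2's Heterogeneous Memory Management, under which a kernel can dereference ordinary malloc'd host memory on supported Linux systems. The sketch below illustrates that pattern under those assumptions; the attribute check and kernel are illustrative and are not taken from the repo itself.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(int *data, int n, int factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;                 // dereference a plain malloc'd pointer
}

int main() {
    // HMM (or NVLink C2C) exposes pageable host memory to the GPU; check first.
    int pageable = 0;
    cudaDeviceGetAttribute(&pageable, cudaDevAttrPageableMemoryAccess, 0);
    if (!pageable) {
        printf("System-allocated memory is not GPU-accessible on this machine\n");
        return 0;
    }

    const int n = 1 << 20;
    int *data = (int *)malloc(n * sizeof(int));   // ordinary system allocation, no cudaMalloc
    for (int i = 0; i < n; ++i) data[i] = i;

    scale<<<(n + 255) / 256, 256>>>(data, n, 2);  // no cudaMemcpy in either direction
    cudaDeviceSynchronize();

    printf("data[42] = %d\n", data[42]);          // expect 84
    free(data);
    return 0;
}
```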
Alternatives and similar repositories for HMM_sample_code
Users interested in HMM_sample_code are also comparing it to the libraries listed below.
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆85 · Updated 2 months ago
- ☆50 · Updated 6 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only. ☆18 · Updated last year
- GPTQ inference TVM kernel ☆40 · Updated last year
- ☆94 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- An extension library of the WMMA API (Tensor Core API) ☆108 · Updated last year
- A memory profiler for NVIDIA GPUs to explore memory inefficiencies in GPU-accelerated applications. ☆26 · Updated last year
- An Attention Superoptimizer ☆22 · Updated 10 months ago
- Sample Codes using NVSHMEM on Multi-GPU ☆30 · Updated 2 years ago
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated 3 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper; see the online-softmax sketch after this list ☆102 · Updated 7 years ago
- Artifacts of EVT ASPLOS'24 ☆28 · Updated last year
- ☆65 · Updated 6 months ago
- GPU Performance Advisor ☆65 · Updated 3 years ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- ☆37 · Updated 2 weeks ago
- ☆19 · Updated last year
- A practical way of learning Swizzle ☆33 · Updated 9 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆189 · Updated 9 months ago
- ☆60 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated 2 months ago
- ☆26 · Updated 9 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆30 · Updated 11 months ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core); see the WMMA sketch after this list ☆145 · Updated 5 years ago
- ☆77 · Updated 4 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆42 · Updated 3 years ago
- ☆50 · Updated last year
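The "Online normalizer calculation for softmax" benchmark above targets a well-known single-pass algorithm (Milakov & Gimelshein, 2018): keep a running maximum and a running normalizer, rescaling the normalizer whenever the maximum grows. Below is a minimal host-side sketch of that recurrence; the names are illustrative and it is not code from the benchmark repository.

```cuda
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

void online_softmax(const std::vector<float>& x, std::vector<float>& y) {
    // Single pass: maintain running max m and running normalizer d,
    // rescaling d whenever the running max increases.
    float m = -std::numeric_limits<float>::infinity();
    float d = 0.0f;
    for (float v : x) {
        float m_new = std::max(m, v);
        d = d * std::exp(m - m_new) + std::exp(v - m_new);
        m = m_new;
    }
    // Second pass only writes the normalized outputs.
    y.resize(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / d;
}

int main() {
    std::vector<float> x = {1.0f, 2.0f, 3.0f}, y;
    online_softmax(x, y);
    std::printf("%f %f %f\n", y[0], y[1], y[2]);  // ~0.090 0.245 0.665
    return 0;
}
```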
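Several of the listed projects build on the WMMA (Tensor Core) API. As a rough sketch of what that API looks like, assuming a GPU with compute capability 7.0 or newer, one warp computes a single 16×16×16 fp16 tile with fp32 accumulation; the kernel is illustrative and not taken from any of the repositories.

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes a 16x16x16 tile: C = A * B (fp16 inputs, fp32 accumulate).
__global__ void wmma_16x16x16(const half* A, const half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;
    wmma::fill_fragment(c, 0.0f);
    wmma::load_matrix_sync(a, A, 16);                 // leading dimension 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(c, a, b, c);
    wmma::store_matrix_sync(C, c, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B; float *C;
    cudaMallocManaged(&A, 256 * sizeof(half));
    cudaMallocManaged(&B, 256 * sizeof(half));
    cudaMallocManaged(&C, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { A[i] = __float2half(1.0f); B[i] = __float2half(1.0f); }
    wmma_16x16x16<<<1, 32>>>(A, B, C);                // one full warp
    cudaDeviceSynchronize();
    printf("C[0] = %f (expect 16)\n", C[0]);          // row of ones dot column of ones
    return 0;
}
```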