leimao / Nsight-Compute-Docker-Image
Nsight Compute In Docker
☆12Updated last year
Alternatives and similar repositories for Nsight-Compute-Docker-Image
Users interested in Nsight-Compute-Docker-Image are comparing it to the libraries listed below.
- Standalone Flash Attention v2 kernel without libtorch dependency☆112Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer☆96Updated last month
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios.☆41Updated 8 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance⚡️ (a minimal WMMA sketch follows this list).☆124Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference.☆45Updated 4 months ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API☆33Updated 2 years ago
- LLaMA INT4 CUDA inference with AWQ☆55Updated 9 months ago
- Multiple GEMM operators built with CUTLASS to support LLM inference.☆20Updated 2 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a sketch of the algorithm follows this list)☆102Updated 7 years ago
- A Triton JIT runtime and FFI provider in C++☆27Updated last week
- A practical way of learning swizzling (a shared-memory swizzle sketch follows this list)☆30Updated 8 months ago
- GPTQ inference TVM kernel☆39Updated last year
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend☆79Updated last month
- ☆33Updated 8 months ago
- ☆97Updated 7 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores.☆67Updated last year
- ☆100Updated last year
- ☆71Updated 7 months ago
- A deep learning inference engine with a layered, decoupled architecture☆76Updated 8 months ago
- A simplified flash-attention implementation built with CUTLASS, intended as a teaching aid☆50Updated last year
- DeeperGEMM: crazy optimized version☆72Updated 5 months ago
- ☆14Updated 7 months ago
- ☆39Updated last week
- ☆18Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel …☆186Updated 9 months ago
- Summary of systems papers/frameworks/code/tools for training or serving large models☆57Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon☆56Updated 3 years ago
- ☆12Updated 9 months ago
- Fast and memory-efficient exact attention☆96Updated last week
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline.☆117Updated last year
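
For readers comparing the HGEMM entries above, here is a minimal sketch of the m16n16k16 WMMA API that several of these kernels build on. It is an illustrative single-tile kernel, not code from any listed repository; the kernel name, the row-major-A / column-major-B layout, and the single-warp launch are assumptions chosen for brevity (requires sm_70 or newer).

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of C = A * B with the m16n16k16 WMMA shape.
// Hypothetical example kernel; launch as wmma_hgemm_tile<<<1, 32>>>(dA, dB, dC).
__global__ void wmma_hgemm_tile(const half* A, const half* B, float* C) {
    // Register fragments for the A tile, B tile, and fp32 accumulator.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // C tile starts at zero
    wmma::load_matrix_sync(a_frag, A, 16);  // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```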
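The "Online normalizer calculation for softmax" entry refers to a single-pass algorithm: the running maximum m and the normalizer sum d are updated together, so the input is read once instead of twice. A minimal host-side sketch of that recurrence (the function name and test values are mine, not from the benchmark repo):

```cuda
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>

// Single-pass ("online") softmax normalizer: track the running max m and the
// running sum d. When a larger maximum appears, the accumulated sum is
// rescaled by exp(m - m_new) so all terms stay referenced to the same max.
void online_softmax(const float* x, float* y, int n) {
    float m = -std::numeric_limits<float>::infinity();
    float d = 0.0f;
    for (int i = 0; i < n; ++i) {
        float m_new = std::max(m, x[i]);
        d = d * std::exp(m - m_new) + std::exp(x[i] - m_new);
        m = m_new;
    }
    for (int i = 0; i < n; ++i)
        y[i] = std::exp(x[i] - m) / d;  // normalization, one extra read pass
}

int main() {
    float x[4] = {1.0f, 2.0f, 3.0f, 4.0f}, y[4];
    online_softmax(x, y, 4);
    for (float v : y) std::printf("%f\n", v);  // outputs sum to 1
    return 0;
}
```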
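The swizzle entry refers to XOR-based index swizzling of shared memory: permuting the column index with bits of the row index so a warp's accesses fall in distinct banks without padding. A minimal sketch using a 32x32 transpose, the classic bank-conflict example; the kernel name and launch shape (one 32x32 thread block) are assumptions.

```cuda
// XOR swizzle: element (r, c) is stored at tile[r][c ^ r]. Both the row-wise
// write and the column-wise read then touch 32 distinct banks per warp,
// without the usual tile[32][33] padding trick.
__global__ void swizzled_transpose(const float* in, float* out) {
    __shared__ float tile[32][32];
    int row = threadIdx.y, col = threadIdx.x;

    tile[row][col ^ row] = in[row * 32 + col];   // bank = (col ^ row) % 32: conflict-free
    __syncthreads();

    // Transposed read: element (col, row) lives at tile[col][row ^ col].
    out[row * 32 + col] = tile[col][row ^ col];  // bank = (row ^ col) % 32: conflict-free
}
// Launch: swizzled_transpose<<<1, dim3(32, 32)>>>(d_in, d_out);
```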