leimao / Nsight-Compute-Docker-Image
Nsight Compute In Docker
☆11 · Updated last year
Alternatives and similar repositories for Nsight-Compute-Docker-Image:
Users who are interested in Nsight-Compute-Docker-Image are comparing it to the libraries listed below.
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆17 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ⚡️ ☆63 · Updated last week
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆108 · Updated 6 months ago
- ☆10 · Updated 3 weeks ago
- A practical way of learning Swizzle ☆16 · Updated last month
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 3 weeks ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆74 · Updated this week
- ☆23 · Updated last month
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆90 · Updated last month
- CUDA 8-bit Tensor Core Matrix Multiplication based on the m16n16k16 WMMA API ☆28 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆15 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆63 · Updated 2 weeks ago
- ☆19 · Updated 6 months ago
- GPTQ inference TVM kernel ☆38 · Updated 11 months ago
- ☆10 · Updated this week
- Fast and memory-efficient exact attention ☆57 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆85 · Updated 6 years ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆18 · Updated last week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆63 · Updated this week
- Study of cutlass ☆21 · Updated 4 months ago
- ☆75 · Updated this week
- OneFlow Serving ☆20 · Updated 3 months ago
- ☆63 · Updated this week
- A layered, decoupled deep learning inference engine ☆72 · Updated last month
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated last month
- Quantized Attention on GPU ☆45 · Updated 4 months ago
- ☆67 · Updated 2 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆60 · Updated 7 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆58 · Updated 6 months ago