leimao / Nsight-Systems-Docker-Image
Nsight Systems In Docker
☆20 · Updated last year
Alternatives and similar repositories for Nsight-Systems-Docker-Image
Users interested in Nsight-Systems-Docker-Image are comparing it to the libraries listed below.
- Open Source Projects from Pallas Lab ☆20 · Updated 3 years ago
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆16 · Updated 5 months ago
- LLaMA INT4 CUDA inference with AWQ ☆54 · Updated 5 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 10 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆93 · Updated this week
- ☆31 · Updated 5 months ago
- ☆152 · Updated 2 years ago
- ☆11 · Updated 4 months ago
- ☆35 · Updated this week
- A curated list for Efficient Large Language Models ☆11 · Updated last year
- ☆77 · Updated 5 months ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆149 · Updated 2 weeks ago
- ☆71 · Updated 8 months ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 7 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated last month
- ☆19 · Updated 9 months ago
- RISC-V C and Triton AI benchmark ☆19 · Updated 8 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs to achieve peak performance (see the sketch after this list).⚡️ ☆87 · Updated 2 months ago
- Model compression for ONNX ☆96 · Updated 8 months ago
- ☆69 · Updated 2 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆112 · Updated last year
- Flexible simulator for mixed-precision and format simulation of LLMs and vision transformers. ☆51 · Updated 2 years ago
- Study of Ampere's sparse matmul ☆18 · Updated 4 years ago
- A Winograd minimal filtering implementation in CUDA ☆25 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆79 · Updated last week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆63 · Updated 10 months ago
- ☆36 · Updated last year
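For readers comparing the Tensor Core GEMM entries above, here is a minimal sketch of the WMMA-style HGEMM that the ⚡️ entry refers to. This is not code from that repository: the kernel name, tiling, and layout choices (16x16x16 fragments, fp32 accumulation, row-major A, column-major B, dimensions assumed to be multiples of 16) are illustrative assumptions.

```cuda
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes one 16x16 tile of C = A * B.
// Assumptions (not from the listed repo): A is MxK row-major, B is KxN
// column-major, C is MxN row-major, and M, N, K are multiples of 16.
__global__ void wmma_hgemm_sketch(const half* A, const half* B, float* C,
                                  int M, int N, int K) {
    // Map each warp to one 16x16 output tile of C.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;
    if (warpM * 16 >= M || warpN * 16 >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;
    wmma::fill_fragment(accFrag, 0.0f);

    // March along K in 16-wide steps, accumulating into the fragment.
    for (int k = 0; k < K; k += 16) {
        // Leading dimension is K for both operands: rows of A and
        // columns of B are contiguous runs of length K.
        wmma::load_matrix_sync(aFrag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(bFrag, B + warpN * 16 * K + k, K);
        wmma::mma_sync(accFrag, aFrag, bFrag, accFrag);
    }

    // Write the fp32 accumulator tile back to C (row-major, ld = N).
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, accFrag, N,
                            wmma::mem_row_major);
}
```

A matching launch assigns one warp per 16x16 output tile, e.g. dim3 block(128, 4) with a grid of ceil(M/64) x ceil(N/64) blocks. The listed repositories layer shared-memory staging, double buffering, and swizzling on top of this skeleton to actually approach peak throughput.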