Karbo123 / pytorch_grouped_gemm
High Performance Grouped GEMM in PyTorch
☆24 · Updated 2 years ago
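A grouped GEMM computes a batch of independent matrix products C[i] = A[i] @ B[i] where each problem in the group may have different (M, N, K) shapes, fusing them into one kernel launch instead of many. The repository itself provides a CUDA implementation; the pure-Python sketch below only illustrates the semantics, and the name `grouped_gemm` is illustrative, not the library's actual API:

```python
# Reference semantics of a grouped GEMM: a list of independent matrix
# products C[i] = A[i] @ B[i], where each problem in the group may have
# a different (M, N, K). A real implementation fuses these into a single
# GPU kernel launch; this sketch only shows what is computed.
# NOTE: `grouped_gemm` is an illustrative name, not this repo's API.

def matmul(a, b):
    """Naive matrix multiply on nested lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

def grouped_gemm(a_list, b_list):
    """Compute every product in the group; shapes may differ per problem."""
    return [matmul(a, b) for a, b in zip(a_list, b_list)]

# Two problems with different shapes: (1x2)@(2x2) and (2x1)@(1x3).
outs = grouped_gemm(
    [[[1.0, 2.0]], [[1.0], [2.0]]],
    [[[1.0, 0.0], [0.0, 1.0]], [[3.0, 4.0, 5.0]]],
)
print(outs[0])  # [[1.0, 2.0]]
print(outs[1])  # [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
```

The point of fusing is to amortize launch overhead when the group contains many small, irregularly shaped problems (e.g. expert layers in a mixture-of-experts model), which is where a per-problem loop of separate GEMM calls performs poorly.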
Alternatives and similar repositories for pytorch_grouped_gemm:
Users interested in pytorch_grouped_gemm are comparing it to the libraries listed below.
- ☆67 · Updated last month
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles. ☆174 · Updated 2 months ago
- nnScaler: Compiling DNN models for parallel training. ☆87 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆58 · Updated 2 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆43 · Updated last week
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated 5 months ago
- ☆72 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. ☆87 · Updated 10 months ago
- ☆79 · Updated 4 months ago
- Examples of CUDA implementations using CUTLASS CuTe. ☆128 · Updated last month
- Play GEMM with TVM. ☆85 · Updated last year
- ☆94 · Updated last month
- ☆178 · Updated 6 months ago
- GPU TopK benchmark. ☆14 · Updated last month
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores). ☆122 · Updated 4 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆85 · Updated 2 years ago
- ☆38 · Updated 7 months ago
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture. ☆52 · Updated 5 months ago
- FlexFlow Serve: Low-latency, high-performance LLM serving. ☆16 · Updated this week
- ☆70 · Updated 3 years ago
- ☆28 · Updated 3 weeks ago
- Automated parallelization system and infrastructure for multiple ecosystems. ☆76 · Updated 2 months ago
- Quantized attention on GPU. ☆34 · Updated 2 months ago
- ☆39 · Updated this week
- Standalone FlashAttention-2 kernel without a libtorch dependency. ☆99 · Updated 4 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections. ☆117 · Updated 2 years ago
- ☆134 · Updated 6 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆54 · Updated 4 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆94 · Updated 6 months ago
- GitHub mirror of the triton-lang/triton repo. ☆19 · Updated this week