yinuotxie / Efficient-LLM-Inferencing-on-GPUs
Penn CIS 5650 (GPU Programming and Architecture) Final Project
☆29 · Updated last year
Alternatives and similar repositories for Efficient-LLM-Inferencing-on-GPUs:
Users interested in Efficient-LLM-Inferencing-on-GPUs are comparing it to the repositories listed below.
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 3 weeks ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆89 · Updated 3 weeks ago
- Optimize GEMM with Tensor Cores step by step. ☆24 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆101 · Updated 8 months ago
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture. ☆60 · Updated 7 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a single-pass sketch appears after this list). ☆85 · Updated 6 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores (see the warp-per-row sketch after this list). ☆57 · Updated 6 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆202 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency. ☆106 · Updated 6 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance⚡️ (see the WMMA tile sketch after this list). ☆59 · Updated 2 weeks ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆178 · Updated last month
- LLaMA INT4 CUDA inference with AWQ. ☆53 · Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe. ☆145 · Updated last month
- A low-latency & high-throughput serving engine for LLMs. ☆325 · Updated last month
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores. ☆50 · Updated last year
- Play GEMM with TVM. ☆89 · Updated last year
- Curated collection of papers on MoE model inference. ☆110 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆108 · Updated last week
- Summary of awesome work on optimizing LLM inference. ☆64 · Updated last week
- A stripped-down flash-attention implemented with CUTLASS, intended to be instructive. ☆38 · Updated 7 months ago
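One of the entries above benchmarks the online-softmax trick from the "Online normalizer calculation for softmax" paper. For reference, a minimal CUDA sketch of the single-pass normalizer follows; the kernel name and the one-thread-per-row layout are illustrative assumptions, not code from any listed repository.

```cuda
// Minimal sketch of the single-pass "online" softmax normalizer, for
// reference only. Kernel name and one-thread-per-row layout are
// illustrative assumptions; production kernels use warp-level reductions.
#include <cuda_runtime.h>
#include <math.h>

__global__ void online_softmax_rows(const float* x, float* y, int rows, int cols) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;
    const float* in  = x + (size_t)row * cols;
    float*       out = y + (size_t)row * cols;

    // One pass over the row: track the running max m and the running
    // normalizer d, rescaling d whenever the max changes.
    float m = -INFINITY, d = 0.0f;
    for (int i = 0; i < cols; ++i) {
        float m_new = fmaxf(m, in[i]);
        d = d * expf(m - m_new) + expf(in[i] - m_new);
        m = m_new;
    }
    // Second pass writes the normalized probabilities.
    for (int i = 0; i < cols; ++i)
        out[i] = expf(in[i] - m) / d;
}
```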
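For the HGEMV entries, the common warp-per-row pattern can be sketched as follows; again the kernel name and launch shape are assumptions made for illustration.

```cuda
// Minimal sketch of a warp-per-row HGEMV (y = A * x, A is M x N row-major,
// half-precision inputs, float accumulation). Kernel name and launch shape
// (<<<M, 32>>>) are assumptions for illustration.
#include <cuda_fp16.h>

__global__ void hgemv_warp_per_row(const half* A, const half* x, half* y,
                                   int M, int N) {
    int row  = blockIdx.x;      // one 32-thread warp per output row
    int lane = threadIdx.x;     // lane id 0..31
    float acc = 0.0f;
    // Strided partial dot product across the row.
    for (int col = lane; col < N; col += 32)
        acc += __half2float(A[(size_t)row * N + col]) * __half2float(x[col]);
    // Warp-level tree reduction of the 32 partial sums.
    for (int offset = 16; offset > 0; offset >>= 1)
        acc += __shfl_down_sync(0xffffffff, acc, offset);
    if (lane == 0) y[row] = __float2half(acc);
}
```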
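And for the Tensor Core HGEMM entries, a bare-bones WMMA tile looks roughly like the sketch below; it is written under simplifying assumptions rather than taken from any listed repository.

```cuda
// Bare-bones Tensor Core HGEMM tile via the WMMA API: one warp computes one
// 16x16 tile of C = A * B. Simplifying assumptions: M, N, K are multiples of
// 16, A is row-major (M x K), B is column-major (K x N), launch is
// <<<dim3(N/16, M/16), 32>>>. Real repositories add shared-memory staging,
// swizzling, and software pipelining on top of this.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_hgemm_tile(const half* A, const half* B, float* C,
                                int M, int N, int K) {
    int tileM = blockIdx.y * 16;    // row offset of this warp's C tile
    int tileN = blockIdx.x * 16;    // column offset of this warp's C tile
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a, A + (size_t)tileM * K + k, K);  // lda = K
        wmma::load_matrix_sync(b, B + (size_t)tileN * K + k, K);  // ldb = K
        wmma::mma_sync(acc, a, b, acc);                           // acc += a * b
    }
    wmma::store_matrix_sync(C + (size_t)tileM * N + tileN, acc, N,
                            wmma::mem_row_major);
}
```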