High Performance Grouped GEMM in PyTorch
☆31 · Updated May 10, 2022
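For readers new to the term, here is a minimal sketch of the semantics a grouped GEMM kernel fuses. This is plain PyTorch for illustration only; the function name and shapes are made up and are not this repo's API. Libraries such as this one, or the CUTLASS grouped GEMM bindings listed below, replace the Python loop with a single fused kernel launch.

```python
import torch

# Reference semantics of a grouped GEMM: many independent matmuls, each with
# its own problem size, computed together. A fused kernel performs all groups
# in one launch instead of looping in Python.
def grouped_gemm_reference(As, Bs):
    return [a @ b for a, b in zip(As, Bs)]

# Hypothetical example: three groups with different M, shared K and N.
As = [torch.randn(m, 64) for m in (128, 256, 32)]
Bs = [torch.randn(64, 96) for _ in range(3)]
Cs = grouped_gemm_reference(As, Bs)
print([c.shape for c in Cs])  # [torch.Size([128, 96]), torch.Size([256, 96]), torch.Size([32, 96])]
```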
Alternatives and similar repositories for pytorch_grouped_gemm
Users interested in pytorch_grouped_gemm are comparing it to the libraries listed below.
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated Jun 8, 2023
- PyTorch bindings for CUTLASS grouped GEMM. ☆144 · Updated May 29, 2025
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated Sep 10, 2024
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated Feb 27, 2025
- ☆47 · Updated Dec 13, 2024
- ☆45 · Updated Feb 27, 2026
- ☆24 · Updated Nov 22, 2022
- Distributed IO-aware Attention algorithm ☆24 · Updated Sep 24, 2025
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated Feb 20, 2026
- PyTorch bindings for CUTLASS grouped GEMM. ☆185 · Updated Feb 19, 2026
- Performance engineering ☆30 · Updated Jul 11, 2024
- Specialized Parallel Linear Algebra, providing distributed GEMM functionality for specific matrix distributions with optional GPU acceleration ☆31 · Updated Jun 26, 2024
- Official repository for the paper "VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization" ☆34 · Updated Sep 17, 2024
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated Mar 13, 2024
- ☆34 · Updated Feb 3, 2025
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆128 · Updated Jul 13, 2024
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆59 · Updated Oct 27, 2025
- ☆30 · Updated Sep 4, 2023
- ☆168 · Updated Feb 5, 2026
- The ASPLOS 2025 / EuroSys 2025 Contest Track ☆40 · Updated Aug 7, 2025
- Well-annotated word2vec source code with detailed bilingual (Chinese/English) comments ☆10 · Updated Oct 3, 2021
- Extensible collectives library in Triton ☆96 · Updated Mar 31, 2025
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Cores) ☆144 · Updated Aug 18, 2020
- ☆159 · Updated Dec 26, 2024
- Implement Flash Attention using Cute. ☆102 · Updated Dec 17, 2024
- ☆262 · Updated Jul 11, 2024
- A series of GPU optimization topics, introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,244 · Updated Jul 29, 2023
- Automating analysis from trace files ☆63 · Updated this week
- A deep learning intermediate representation aimed at multi-platform compilation optimization ☆10 · Updated Oct 28, 2024
- ☆40 · Updated Feb 28, 2020
- Balanced K-means in PyTorch with strong GPU acceleration ☆12 · Updated Apr 30, 2020
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆48 · Updated May 10, 2024
- Performance benchmarking with ColossalAI ☆39 · Updated Jul 6, 2022
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated Feb 29, 2024
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Updated Mar 12, 2024
- A concise flash-attention implementation using CUTLASS, written for teaching purposes ☆58 · Updated Aug 12, 2024
- CPU and GPU tutorial examples ☆13 · Updated Apr 4, 2025
- ☆14 · Updated Feb 11, 2026
- Code for the paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" with GPT-J implementation. ☆15 · Updated Mar 22, 2023