Karbo123 / pytorch_grouped_gemm
High Performance Grouped GEMM in PyTorch
☆31 · May 10, 2022 · Updated 3 years ago
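As a quick orientation, here is a minimal PyTorch sketch of the semantics a grouped GEMM provides (hypothetical shapes for illustration; this naive loop is the reference computation, not this repo's fused kernel or API):

```python
import torch

# Grouped GEMM: a batch of independent matmuls whose problem sizes may all
# differ, which is why a single torch.bmm (fixed shapes) cannot express it.
# Hypothetical (M, K, N) per group, chosen only for illustration.
shapes = [(128, 64, 96), (256, 32, 96), (64, 48, 16)]
As = [torch.randn(m, k) for m, k, _ in shapes]
Bs = [torch.randn(k, n) for _, k, n in shapes]

# Naive reference semantics: one GEMM per group. A grouped-GEMM kernel
# computes the same outputs in a single fused launch instead of a loop.
outs = [a @ b for a, b in zip(As, Bs)]
assert [tuple(o.shape) for o in outs] == [(m, n) for m, _, n in shapes]
```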
Alternatives and similar repositories for pytorch_grouped_gemm
Users interested in pytorch_grouped_gemm are comparing it to the libraries listed below.
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Jun 8, 2023 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆143 · May 29, 2025 · Updated 8 months ago
- ☆42 · Jan 24, 2026 · Updated 3 weeks ago
- A CUDA implementation of KDTree in PyTorch ☆30 · Jul 18, 2021 · Updated 4 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Sep 10, 2024 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated 11 months ago
- ☆47 · Dec 13, 2024 · Updated last year
- ☆84 · Dec 2, 2022 · Updated 3 years ago
- ☆21 · Jan 15, 2026 · Updated last month
- ☆24 · Nov 22, 2022 · Updated 3 years ago
- Distributed IO-aware Attention algorithm ☆24 · Sep 24, 2025 · Updated 4 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · May 12, 2025 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆184 · Dec 16, 2025 · Updated last month
- Performance engineering ☆30 · Jul 11, 2024 · Updated last year
- Specialized Parallel Linear Algebra, providing distributed GEMM functionality for specific matrix distributions with optional GPU acceleration ☆31 · Jun 26, 2024 · Updated last year
- Official repository for the paper VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization ☆34 · Sep 17, 2024 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Mar 13, 2024 · Updated last year
- ☆34 · Feb 3, 2025 · Updated last year
- ☆26 · Dec 3, 2025 · Updated 2 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆56 · Oct 27, 2025 · Updated 3 months ago
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- ☆162 · Feb 5, 2026 · Updated last week
- The ASPLOS 2025 / EuroSys 2025 Contest Track ☆39 · Aug 7, 2025 · Updated 6 months ago
- word2vec source code with detailed bilingual annotations (well-annotated word2vec) ☆10 · Oct 3, 2021 · Updated 4 years ago
- Extensible collectives library in Triton ☆95 · Mar 31, 2025 · Updated 10 months ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆145 · Aug 18, 2020 · Updated 5 years ago
- ☆158 · Dec 26, 2024 · Updated last year
- Implement Flash Attention using CuTe. ☆100 · Dec 17, 2024 · Updated last year
- ☆261 · Jul 11, 2024 · Updated last year
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels ☆1,239 · Jul 29, 2023 · Updated 2 years ago
- A deep learning intermediate representation for multi-platform compilation optimization ☆10 · Oct 28, 2024 · Updated last year
- ☆40 · Feb 28, 2020 · Updated 5 years ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆48 · May 10, 2024 · Updated last year
- Performance benchmarking with ColossalAI ☆38 · Jul 6, 2022 · Updated 3 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated last year
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Mar 12, 2024 · Updated last year
- A simplified flash-attention implementation using CUTLASS, written for teaching purposes ☆56 · Aug 12, 2024 · Updated last year
- ☆10 · Oct 26, 2016 · Updated 9 years ago