anilshanbhag / gpu-topk
Efficient Top-K implementation on the GPU
☆193 · Apr 9, 2019 · Updated 6 years ago
Alternatives and similar repositories for gpu-topk
Users interested in gpu-topk are comparing it to the libraries listed below.
- A way to use CUDA to accelerate a top-k algorithm ☆30 · Jul 11, 2017 · Updated 8 years ago
- ☆14 · Sep 14, 2021 · Updated 4 years ago
- Parallel selection on GPUs ☆15 · Mar 23, 2021 · Updated 4 years ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆78 · Aug 12, 2024 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- Multiple 1-stencil implementations using NVIDIA CUDA ☆13 · Dec 2, 2017 · Updated 8 years ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆26 · Jan 22, 2026 · Updated 3 weeks ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆105 · Jul 27, 2018 · Updated 7 years ago
- ☆114 · May 16, 2025 · Updated 8 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Mar 15, 2024 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Jan 28, 2025 · Updated last year
- ☆158 · Dec 26, 2024 · Updated last year
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,818 · Oct 9, 2023 · Updated 2 years ago
- Scalable radix top-k selection on GPUs ☆21 · Jan 27, 2025 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆252 · May 6, 2025 · Updated 9 months ago
- Writing a CUDA software ray tracing renderer with Analysis-Driven Optimization from scratch: a Python-importable, distributed parallel re… ☆37 · Oct 5, 2025 · Updated 4 months ago
- Flash Attention in raw CUDA C, beating PyTorch ☆37 · May 14, 2024 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- GPU TopK Benchmark ☆18 · Dec 19, 2024 · Updated last year
- Multiple GEMM operators constructed with CUTLASS to support LLM inference ☆20 · Aug 3, 2025 · Updated 6 months ago
- ☆1,988 · Jul 29, 2023 · Updated 2 years ago
- ☆32 · Aug 24, 2022 · Updated 3 years ago
- SONG: Approximate Nearest Neighbor Search on GPU. SONG is a graph-based approximate nearest neighbor search toolbox. ☆72 · Apr 29, 2025 · Updated 9 months ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA ☆251 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,006 · Sep 19, 2024 · Updated last year
- Yinghan's Code Sample ☆365 · Jul 25, 2022 · Updated 3 years ago
- ☆20 · Dec 24, 2024 · Updated last year
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ☆148 · May 10, 2025 · Updated 9 months ago
- CUDA Core Compute Libraries ☆2,170 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,266 · Updated this week
- A study of CUTLASS ☆22 · Nov 10, 2024 · Updated last year
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores) ☆145 · Aug 18, 2020 · Updated 5 years ago
- Source code for matrix multiplication implementations on CUDA ☆34 · Sep 12, 2018 · Updated 7 years ago
- Convolution operator optimization on the GPU, including GEMM-based (implicit GEMM) convolution ☆43 · Sep 29, 2025 · Updated 4 months ago
- Efficient Triton implementation of Native Sparse Attention ☆263 · May 23, 2025 · Updated 8 months ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance ☆407 · Jan 2, 2025 · Updated last year
- QuickerADC is an implementation of highly efficient product quantizers leveraging SIMD shuffle instructions, integrated into FAISS ☆61 · Jan 3, 2019 · Updated 7 years ago
- A CUDA implementation of a priority queue ☆84 · Sep 18, 2020 · Updated 5 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆72 · Sep 8, 2024 · Updated last year