CUDA SGEMM optimization note
☆15 · updated Oct 31, 2023
Alternatives and similar repositories for cuda-sgemm-optimization
Users interested in cuda-sgemm-optimization are also comparing it to the repositories listed below.
- 🎉 My Collections of CUDA Kernels · ☆10 · updated Jun 25, 2024
- A simple API to use CUPTI · ☆10 · updated Aug 19, 2025
- TileGraph is an experimental DNN compiler that utilizes static code generation and kernel fusion techniques · ☆11 · updated Sep 18, 2024
- ☆42 · updated Mar 4, 2026
- CUDA project for uni subject · ☆26 · updated Oct 26, 2020
- Immix GC for LLVM-based languages · ☆15 · updated Apr 2, 2025
- Flash Attention in ~100 lines of CUDA (forward pass only) · ☆10 · updated Jun 10, 2024
- Implement FlashAttention v2 with minimal code to learn · ☆15 · updated Jun 12, 2024
- ☆12 · updated Feb 7, 2018
- ☆14 · updated Oct 9, 2022
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆172 · updated Feb 11, 2026
- Step-by-step optimization of CUDA SGEMM · ☆448 · updated Mar 30, 2022
- ☆15 · updated Apr 28, 2023
- ☆17 · updated Apr 9, 2025
- E-Graph library · ☆22 · updated Apr 4, 2024
- ☆27 · updated May 27, 2024
- Calibration of depth sensors, e.g. Kinect, Asus Xtion · ☆13 · updated Apr 26, 2019
- NetHCF: Enabling Line-rate and Adaptive Spoofed IP Traffic Filtering · ☆13 · updated Mar 17, 2022
- ☆15 · updated Dec 1, 2023
- Whisper in TensorRT-LLM · ☆17 · updated Sep 21, 2023
- A CUDA kernel for NHWC GroupNorm for PyTorch · ☆23 · updated Nov 15, 2024
- A graph coloring register allocator for LLVM · ☆11 · updated Jan 23, 2017
- Sequence-level 1F1B schedule for LLMs · ☆38 · updated Aug 26, 2025
- 5th place solution in "NIPS 2017: Non-targeted Adversarial Attack" (with solutions for the targeted attack and defense tracks) · ☆10 · updated Nov 14, 2017
- Acclaim: Adaptive Memory Reclaim to Improve User Experience in Android Systems [ATC '20] · ☆16 · updated Aug 1, 2020
- LLM inference in C/C++ · ☆20 · updated Oct 22, 2025
- ☆14 · updated Jan 10, 2020
- Time-series change-point detection based on Contrastive Predictive Coding (PyTorch implementation) · ☆12 · updated Oct 20, 2022
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆171 · updated Nov 11, 2025
- Welder (OSDI 2023), a deep-learning compiler · ☆33 · updated Nov 24, 2023
- Nano vLLM · ☆13 · updated Jun 26, 2025
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference · ☆46 · updated Jun 11, 2025
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS · ☆494 · updated Jan 20, 2026
- MD5 collision generation implementation; removes the Boost dependency and simplifies the build · ☆12 · updated Jan 25, 2018
- ☆20 · updated May 24, 2025
- A concurrent LRU cache · ☆23 · updated Feb 14, 2021
- Embedding-based real-time change point detection with application to activity segmentation in smart home time series data · ☆16 · updated Nov 20, 2022
- DiscreteTom's Blog Boilerplate · ☆10 · updated Mar 6, 2023
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios · ☆16 · updated Aug 31, 2023