gpu-mode / popcorn-cli
☆39 · Updated last month
Alternatives and similar repositories for popcorn-cli
Users interested in popcorn-cli are comparing it to the libraries listed below.
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆85 · Updated this week
- An experimental CPU backend for Triton ☆147 · Updated 3 months ago
- ☆233 · Updated 3 weeks ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆55 · Updated this week
- ☆88 · Updated 10 months ago
- Extensible collectives library in Triton ☆88 · Updated 5 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- Cataloging released Triton kernels. ☆252 · Updated this week
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆148 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 3 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆221 · Updated this week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆79 · Updated 2 months ago
- Collection of kernels written in the Triton language ☆154 · Updated 5 months ago
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Large Language Models. ☆63 · Updated 2 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆69 · Updated last month
- Framework to reduce autotune overhead to zero for well-known deployments. ☆81 · Updated last week
- Fast low-bit matmul kernels in Triton ☆357 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆219 · Updated 4 months ago
- Applied AI experiments and examples for PyTorch ☆295 · Updated 3 weeks ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆114 · Updated last year
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆268 · Updated last week
- ring-attention experiments ☆150 · Updated 10 months ago
- How to ship your LLM-generated kernels to PyTorch ☆27 · Updated last week
- Ahead-of-Time (AOT) Triton Math Library ☆76 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated 2 weeks ago
- AI Tensor Engine for ROCm ☆267 · Updated this week
- Fastest kernels written from scratch ☆323 · Updated 5 months ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆120 · Updated last year
- High-Performance SGEMM on CUDA devices ☆100 · Updated 7 months ago
- ☆230 · Updated last year