Evaluating Large Language Models for CUDA Code Generation
ComputeEval is a framework designed to generate and evaluate CUDA code from Large Language Models.
☆103 · Updated Jan 8, 2026
Alternatives and similar repositories for compute-eval
Users interested in compute-eval are comparing it to the libraries listed below.
- Personal solutions to the Triton Puzzles ☆20 · Updated Jul 18, 2024
- ☆63 · Updated Jul 14, 2025
- Persistent dense gemm for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025
- ☆12 · Updated Aug 26, 2025
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries and software ☆61 · Updated Feb 23, 2025
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆115 · Updated Jun 14, 2025
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) ☆820 · Updated this week
- ☆65 · Updated Apr 26, 2025
- ☆53 · Updated this week
- High-performance FP8 GEMM kernels for SM89 and later GPUs ☆20 · Updated Jan 24, 2025
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆48 · Updated Jan 21, 2026
- ☆32 · Updated Jul 2, 2025
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆43 · Updated Nov 19, 2025
- Autonomous GPU Kernel Generation & Optimization via Deep Agents ☆242 · Updated this week
- Ship correct and fast LLM kernels to PyTorch ☆142 · Updated Jan 14, 2026
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated Jul 4, 2025
- Ahead-of-Time (AOT) Triton Math Library ☆92 · Updated this week
- Scalable and Stable Parallelization of Nonlinear RNNs ☆29 · Updated Oct 21, 2025
- A high-performance attention mechanism that computes softmax normalization in a single streaming pass using running accumulators (online … ☆29 · Updated Oct 11, 2025
- Automatic differentiation for Triton kernels ☆29 · Updated Aug 12, 2025
- Benchmarking guide for the Azure AI Infrastructure ☆40 · Updated Feb 20, 2026
- A practical way of learning Swizzle ☆37 · Updated Feb 3, 2025
- An Open-Source RAG Workload Trace to Optimize RAG Serving Systems ☆35 · Updated Nov 18, 2025
- ☆20 · Updated Jan 14, 2022
- NCCL Examples from the Official NVIDIA NCCL Developer Guide ☆20 · Updated May 29, 2018
- GVProf: A Value Profiler for GPU-based Clusters ☆52 · Updated Mar 24, 2024
- Sample Codes using NVSHMEM on Multi-GPU ☆30 · Updated Jan 22, 2023
- My Paper Reading Lists and Notes ☆21 · Updated Feb 17, 2026
- GPU Affinity is a package that automatically sets the CPU process affinity to match the hardware architecture on a given platform ☆29 · Updated Dec 8, 2023
- ☆21 · Updated Mar 3, 2025
- MSLK (Meta Superintelligence Labs Kernels) is a collection of PyTorch GPU operator libraries designed and optimized for GenAI tr… ☆52 · Updated this week
- ☆177 · Updated May 7, 2025
- Applied AI experiments and examples for PyTorch ☆318 · Updated Aug 22, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,361 · Updated Feb 13, 2026
- CUDA Kernel Benchmarking Library ☆820 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆74 · Updated Feb 18, 2026
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs and achieve peak performance ⚡️ ☆147 · Updated May 10, 2025
- Cataloging released Triton kernels ☆295 · Updated Sep 9, 2025
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆277 · Updated Jul 16, 2025