Evaluating Large Language Models for CUDA Code Generation

ComputeEval is a framework designed to generate and evaluate CUDA code from Large Language Models (☆116, updated Mar 17, 2026).
Alternatives and similar repositories for compute-eval
Users interested in compute-eval are comparing it to the libraries listed below.
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators (☆122, updated Jun 14, 2025)
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries and software (☆64, updated Feb 23, 2025)
- Vortex: A Flexible and Efficient Sparse Attention Framework (☆49, updated Jan 21, 2026)
- Autonomous GPU Kernel Generation & Optimization via Deep Agents (☆309, updated Mar 10, 2026)
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) (☆869, updated Mar 9, 2026)
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang (☆44, updated Nov 19, 2025)
- Persistent dense GEMM for Hopper in `CuTeDSL` (☆15, updated Aug 9, 2025)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆51, updated Jul 4, 2025)
- Debug print operator for cudagraph debugging (☆14, updated Aug 2, 2024)
- A high-performance attention mechanism that computes softmax normalization in a single streaming pass using running accumulators (online …) (☆29, updated Oct 11, 2025)
- Applied AI experiments and examples for PyTorch (☆319, updated Aug 22, 2025)
- Benchmarking guide for the Azure AI Infrastructure (☆40, updated Feb 20, 2026)
- GPU Affinity is a package to automatically set the CPU process affinity to match the hardware architecture on a given platform (☆29, updated Dec 8, 2023)
- NCCL Examples from the Official NVIDIA NCCL Developer Guide (☆20, updated May 29, 2018)
- High-performance FP8 GEMM kernels for SM89 and later GPUs (☆20, updated Jan 24, 2025)
- Sample codes using NVSHMEM on multi-GPU systems (☆30, updated Jan 22, 2023)
- Distributed Compiler based on Triton for Parallel Systems (☆1,386, updated Mar 11, 2026)
- Ship correct and fast LLM kernels to PyTorch (☆145, updated Jan 14, 2026)
- WG21 Proposals and Drafts (☆14, updated Jan 6, 2022)
- Automatic differentiation for Triton kernels (☆29, updated Aug 12, 2025)
- MSLK (Meta Superintelligence Labs Kernels) is a collection of PyTorch GPU operator libraries that are designed and optimized for GenAI tr… (☆71, updated this week)
- General-purpose, language-agnostic Continuous Benchmarking (CB) framework (☆35, updated Apr 15, 2020)
- LLM-DSE: Searching Accelerator Parameters with LLM Agents (☆13, updated May 22, 2025)
- Write a fast kernel and see how you compare against the best humans and AI on gpumode.com (☆88, updated this week)
- Scalable and Stable Parallelization of Nonlinear RNNs (☆29, updated Mar 6, 2026)
- My paper reading lists and notes (☆21, updated Mar 13, 2026)
- CUDA Kernel Benchmarking Library (☆831, updated this week)
- Composable and efficient abstractions for iterating over multidimensional spaces in C++ (☆10, updated Nov 22, 2023)
- FPGA-based HyperLogLog Accelerator (☆12, updated Jul 13, 2020)
- MPI Code Generation through Domain-Specific Language Models (☆15, updated Nov 19, 2024)
- My tests and experiments with some popular DL frameworks (☆17, updated Sep 11, 2025)