Samples for CUDA developers demonstrating features in the CUDA Toolkit
☆8,953 · Jan 6, 2026 · Updated 2 months ago
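To give a sense of what these samples look like, here is a minimal vector-add kernel in the style of an introductory CUDA Toolkit sample. This is a hypothetical sketch, not code taken from the repository; it uses only standard CUDA runtime API calls (`cudaMallocManaged`, `cudaDeviceSynchronize`, `cudaFree`).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element: c[i] = a[i] + b[i].
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified memory keeps the sketch short; many samples instead use
    // explicit cudaMalloc + cudaMemcpy transfers.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // One thread per element, rounded up to whole blocks.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Build with `nvcc vec_add.cu -o vec_add`; running it requires the CUDA Toolkit and an NVIDIA GPU.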
Alternatives and similar repositories for cuda-samples
Users interested in cuda-samples are comparing it to the repositories listed below.
- CUDA Library Samples ☆2,346 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,442 · Updated this week
- CUDA Core Compute Libraries ☆2,217 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,782 · Mar 9, 2026 · Updated last week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,821 · Oct 9, 2023 · Updated 2 years ago
- Development repository for the Triton language and compiler ☆18,656 · Updated this week
- Optimized primitives for collective multi-GPU communication ☆4,513 · Mar 8, 2026 · Updated last week
- ☆2,709 · Jan 16, 2024 · Updated 2 years ago
- [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl ☆5,000 · Feb 8, 2024 · Updated 2 years ago
- How to optimize some algorithms in CUDA ☆2,863 · Mar 11, 2026 · Updated last week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,872 · Mar 12, 2026 · Updated last week
- Transformer-related optimization, including BERT and GPT ☆6,397 · Mar 27, 2024 · Updated last year
- Sample code for my CUDA programming book ☆2,020 · Dec 14, 2025 · Updated 3 months ago
- CUDA Kernel Benchmarking Library ☆830 · Updated this week
- Source code examples from the Parallel Forall Blog ☆1,321 · Sep 23, 2025 · Updated 5 months ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,120 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated this week
- NCCL Tests ☆1,459 · Mar 11, 2026 · Updated last week
- Learn CUDA Programming, published by Packt ☆1,235 · Dec 30, 2023 · Updated 2 years ago
- Material for gpu-mode lectures ☆5,841 · Feb 1, 2026 · Updated last month
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆874 · Sep 26, 2025 · Updated 5 months ago
- Ongoing research training transformer models at scale ☆15,647 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs ☆643 · Apr 15, 2025 · Updated 11 months ago
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several… ☆1,247 · Jul 29, 2023 · Updated 2 years ago
- Open Machine Learning Compiler Framework ☆13,197 · Updated this week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆530 · Sep 8, 2024 · Updated last year
- Lightning-fast C++/CUDA neural network framework ☆4,436 · Dec 14, 2025 · Updated 3 months ago
- A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology ☆1,355 · Mar 12, 2026 · Updated last week
- A Chinese translation of the CUDA programming guide ☆1,896 · Nov 13, 2024 · Updated last year
- ☆1,995 · Jul 29, 2023 · Updated 2 years ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,661 · Jan 22, 2026 · Updated last month
- [ARCHIVED] The C++ Standard Library for your entire system. See https://github.com/NVIDIA/cccl ☆2,308 · Feb 7, 2024 · Updated 2 years ago
- State-of-the-art deep learning scripts organized by model - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,745 · Aug 12, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,253 · Feb 27, 2026 · Updated 2 weeks ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆690 · Mar 11, 2026 · Updated last week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,426 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week