GEMM by WMMA (tensor core)
☆15 · Jul 31, 2022 · Updated 3 years ago
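For context, the headline project computes GEMM through the CUDA WMMA (tensor core) API. Below is a minimal sketch of such a kernel, assuming half-precision inputs, a float accumulator, row-major A, column-major B, and M/N/K all multiples of 16; the kernel name and launch geometry are illustrative, not taken from the repository:

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Each warp computes one 16x16 tile of C = A * B.
// A: MxK row-major (half), B: KxN col-major (half), C: MxN row-major (float).
__global__ void wmma_gemm(const half *A, const half *B, float *C,
                          int M, int N, int K) {
    // Map this warp to an output tile coordinate.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;
    wmma::fill_fragment(cFrag, 0.0f);

    // March along K in 16-wide steps, accumulating into the C fragment.
    for (int k = 0; k < K; k += 16) {
        // Leading dimensions: lda = K (row-major A), ldb = K (col-major B).
        wmma::load_matrix_sync(aFrag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(bFrag, B + warpN * 16 * K + k, K);
        wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, cFrag,
                            N, wmma::mem_row_major);
}
```

Requires compute capability 7.0 or higher (`-arch=sm_70` or newer); all 32 threads of a warp must execute the `*_sync` calls together.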
Alternatives and similar repositories for GEMM_WMMA
Users interested in GEMM_WMMA are comparing it to the libraries listed below.
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆46 · Feb 27, 2025 · Updated last year
- TensorFlow fork with Salus integration ☆12 · Jan 7, 2022 · Updated 4 years ago
- Discovery of Structured Parallelism in Sequential and Parallel Code ☆10 · Feb 13, 2021 · Updated 5 years ago
- ☆16 · Jan 14, 2025 · Updated last year
- Reproduction of the libsmctrl paper, with an added Python interface so compute-resource partitioning can be invoked flexibly from Python ☆12 · May 21, 2024 · Updated last year
- An extension library of the WMMA API (Tensor Core API) ☆112 · Jul 12, 2024 · Updated last year
- Triangle Counting for the GPU using CUDA ☆14 · Nov 5, 2015 · Updated 10 years ago
- ☆174 · Feb 5, 2026 · Updated 3 months ago
- ☆19 · Feb 25, 2026 · Updated 2 months ago
- ☆12 · Apr 30, 2024 · Updated 2 years ago
- A simple neural network in C++17 using the Eigen library, supporting both forward and backward propagation ☆11 · Jul 27, 2024 · Updated last year
- Parallel Prefix Sum (Scan) with CUDA ☆29 · Jun 22, 2024 · Updated last year
- Awesome-Parallel-Reasoning: Unlocking the reasoning potential of LLMs. Papers, code, resources & survey ☆52 · Mar 8, 2026 · Updated last month
- A simple and efficient memory pool implemented in C++11 ☆10 · Jun 2, 2022 · Updated 3 years ago
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google Research ☆15 · Jul 13, 2020 · Updated 5 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · May 26, 2024 · Updated last year
- Fast GPU-based tensor core reductions ☆13 · Jan 13, 2023 · Updated 3 years ago
- Database kernel notes ☆13 · Aug 18, 2022 · Updated 3 years ago
- Estimating hardware and cloud costs of LLMs and transformer projects ☆21 · Apr 1, 2026 · Updated last month
- Large DNN training framework for consumer GPUs ☆70 · Updated this week
- A simple CUDA C application for NVIDIA GPUs ☆11 · Jun 7, 2022 · Updated 3 years ago
- ☆13 · Nov 25, 2019 · Updated 6 years ago
- ☆13 · Apr 30, 2024 · Updated 2 years ago
- An attempt to summarize Raft in one page of pseudocode ☆20 · Mar 5, 2018 · Updated 8 years ago
- GEMV implementation with CUTLASS ☆21 · Aug 21, 2025 · Updated 8 months ago
- Adaptive floating-point-based numerical format for resilient deep learning ☆14 · Apr 11, 2022 · Updated 4 years ago
- ☆18 · Apr 22, 2026 · Updated last week
- LLVM-based assembler for x86, Arm, MIPS, PowerPC, SPARC, and SystemZ (Rust API) ☆20 · Apr 14, 2016 · Updated 10 years ago
- ☆17 · Jan 24, 2024 · Updated 2 years ago
- ☆11 · May 2, 2023 · Updated 3 years ago
- Set up PyTorch on Android ☆12 · Mar 2, 2020 · Updated 6 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆12 · Jun 10, 2024 · Updated last year
- Socket program to send data with encryption ☆13 · Jun 1, 2021 · Updated 4 years ago
- ☆24 · Jun 12, 2023 · Updated 2 years ago
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS repository ☆81 · Aug 12, 2024 · Updated last year
- Federated Learning - PyTorch ☆15 · Jun 27, 2021 · Updated 4 years ago
- Awesome guides: developer technical documents, API references, code examples, quick starts, programming minute-books, and tutorials. https://aweso… ☆12 · Apr 17, 2019 · Updated 7 years ago
- CUDA project for a university course ☆26 · Oct 26, 2020 · Updated 5 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores ☆92 · Nov 23, 2022 · Updated 3 years ago