Mixed precision training from scratch with Tensors and CUDA
☆29 · May 14, 2024 · Updated last year
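The technique named in the title can be illustrated with a short, self-contained sketch (illustrative only, not code from this repository): keep an FP32 "master" copy of the weights, cast inputs and weights to FP16 for the forward and backward passes, and scale the gradient so that small FP16 values do not flush to zero before being unscaled in FP32. The toy problem (linear regression) and all variable names below are hypothetical.

```python
import numpy as np

# Toy regression problem in FP32
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4)).astype(np.float32)
true_w = np.array([1.0, -2.0, 0.5, 3.0], dtype=np.float32)
y = X @ true_w

w_master = np.zeros(4, dtype=np.float32)  # FP32 master weights
loss_scale = 32.0                          # static loss scaling
lr = 0.1

for _ in range(200):
    # Cast down: FP16 copies used only for compute
    w16 = w_master.astype(np.float16)
    x16 = X.astype(np.float16)
    pred = x16 @ w16                       # FP16 forward pass
    err = pred.astype(np.float32) - y
    # Scale the error before casting to FP16 so tiny gradients survive,
    # then unscale after accumulating, in FP32
    scaled_err = (err * loss_scale).astype(np.float16)
    grad16 = x16.T @ scaled_err            # FP16 backward pass
    grad = grad16.astype(np.float32) / (loss_scale * len(X))
    w_master -= lr * grad                  # FP32 weight update
```

Frameworks use the same structure with a dynamic loss scale that backs off on overflow; the static scale here is the minimal version of the idea.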
Alternatives and similar repositories for mixed-precision-from-scratch
Users who are interested in mixed-precision-from-scratch are comparing it to the libraries listed below.
- ☆12 · Dec 22, 2024 · Updated last year
- ☆15 · Mar 30, 2024 · Updated 2 years ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆142 · Aug 26, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Triton kernels for Flux ☆23 · Jul 7, 2025 · Updated 9 months ago
- BFloat16 Fused Adam Operator for PyTorch ☆19 · Nov 16, 2024 · Updated last year
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆21 · Jan 24, 2025 · Updated last year
- Build CUDA Neural Network From Scratch ☆22 · Aug 28, 2024 · Updated last year
- ☆44 · Nov 1, 2025 · Updated 5 months ago
- A vector field rendering library ☆17 · Jul 31, 2019 · Updated 6 years ago
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging the Hopper architecture's TMA, WGMMA, and Thread Block Clusters ☆10 · Dec 4, 2024 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆82 · Aug 12, 2024 · Updated last year
- A practical way of learning Swizzle ☆37 · Feb 3, 2025 · Updated last year
- A simplified flash-attention implementation built with cutlass, intended for teaching ☆59 · Aug 12, 2024 · Updated last year
- EleutherAI ML Performance reading group repository (slides, meeting recordings, annotated papers) ☆31 · Mar 20, 2026 · Updated 3 weeks ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆16 · Aug 31, 2023 · Updated 2 years ago
- Fast SGEMM emulation on Tensor Cores ☆17 · Feb 16, 2025 · Updated last year
- 2014: Variational Monte Carlo for the harmonic oscillator, helium, hydrogen, and H2 - IPython notebook and FORTRAN90 ☆13 · Jun 23, 2016 · Updated 9 years ago
- Implement Flash Attention using CuTe. ☆105 · Dec 17, 2024 · Updated last year
- ☆20 · Mar 3, 2026 · Updated last month
- PyTorch DL Tutorial using Torchsample ☆11 · May 2, 2017 · Updated 8 years ago
- This project covers convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution. ☆43 · Sep 29, 2025 · Updated 6 months ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 8 months ago
- ☆21 · Sep 3, 2024 · Updated last year
- Stochastic Series Expansion (SSE) for an isotropic S=1/2 antiferromagnetic quantum Heisenberg model on a 1D, 2D, or 3D lattice. Every lattic… ☆15 · Jan 23, 2021 · Updated 5 years ago
- ☆119 · May 16, 2025 · Updated 10 months ago
- ☆91 · Feb 29, 2024 · Updated 2 years ago
- A simple neural network in C++17 using the Eigen library, supporting both forward and backward propagation. ☆11 · Jul 27, 2024 · Updated last year
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆34 · Nov 29, 2024 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆98 · Sep 19, 2025 · Updated 6 months ago
- A simple and efficient memory pool implemented in C++11. ☆10 · Jun 2, 2022 · Updated 3 years ago
- Effective transpose on Hopper GPUs ☆28 · Sep 6, 2025 · Updated 7 months ago
- Database kernel notes ☆13 · Aug 18, 2022 · Updated 3 years ago
- A PyTorch implementation of the `SwinIR: Image Restoration Using Swin Transformer` paper. ☆14 · May 31, 2023 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- Official PyTorch implementation of The Linear Attention Resurrection in Vision Transformer ☆16 · Sep 7, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 10 months ago
- ☆11 · May 2, 2023 · Updated 2 years ago