Mixed precision training from scratch with Tensors and CUDA
☆30 · May 14, 2024 · Updated last year
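The repository above implements this with hand-written CUDA kernels. For orientation, the core recipe (FP32 master weights, FP16 compute, and loss scaling so small gradients survive the half-precision round-trip) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the repo's code; the toy data, learning rate, and static loss-scale value are assumptions chosen for the example.

```python
import numpy as np

# Toy linear-regression problem in FP32.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4)).astype(np.float32)
true_w = np.array([0.5, -1.0, 2.0, 0.25], dtype=np.float32)
y = X @ true_w

w_master = np.zeros(4, dtype=np.float32)   # FP32 master copy of the weights
loss_scale = np.float32(128.0)             # static loss scale (illustrative)
lr = np.float32(0.1)

for step in range(300):
    w16 = w_master.astype(np.float16)      # FP16 working copy for compute
    x16 = X.astype(np.float16)
    pred = x16 @ w16                       # FP16 forward pass
    err = pred.astype(np.float32) - y      # residual, promoted to FP32
    # Scale the residual so small gradient components stay representable
    # in FP16, do the backward matmul in FP16, then unscale in FP32.
    scaled_err16 = (err * loss_scale / np.float32(len(X))).astype(np.float16)
    grad16 = np.float16(2) * (x16.T @ scaled_err16)   # grad of mean squared error
    grad = grad16.astype(np.float32) / loss_scale
    w_master -= lr * grad                  # update applied to FP32 master weights

print(w_master)
```

The update lands within FP16 quantization noise of `true_w`; production setups (and the repo) add dynamic loss scaling and overflow checks on top of this skeleton.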
Alternatives and similar repositories for mixed-precision-from-scratch
Users who are interested in mixed-precision-from-scratch are comparing it to the repositories listed below.
- ☆13 · Dec 22, 2024 · Updated last year
- ☆15 · Mar 30, 2024 · Updated 2 years ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆142 · Aug 26, 2024 · Updated last year
- BFloat16 Fused Adam Operator for PyTorch ☆19 · Nov 16, 2024 · Updated last year
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆21 · Jan 24, 2025 · Updated last year
- ☆45 · Nov 1, 2025 · Updated 6 months ago
- A vector field rendering library ☆17 · Jul 31, 2019 · Updated 6 years ago
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging Hopper architecture's TMA, WGMMA, and Thread Block Cluste… ☆10 · Dec 4, 2024 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Dec 9, 2024 · Updated last year
- FP8 flash attention on the Ada architecture, implemented with the CUTLASS library ☆81 · Aug 12, 2024 · Updated last year
- A practical way of learning Swizzle ☆38 · Feb 3, 2025 · Updated last year
- A stripped-down flash-attention implementation using CUTLASS, intended for teaching ☆59 · Aug 12, 2024 · Updated last year
- EleutherAI ML Performance reading group repository (slides, meeting recordings, annotated papers) ☆31 · Mar 20, 2026 · Updated last month
- Writing FLUX in Triton ☆42 · Sep 22, 2024 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆16 · Aug 31, 2023 · Updated 2 years ago
- Fast SGEMM emulation on Tensor Cores ☆17 · Feb 16, 2025 · Updated last year
- 2014: Variational Monte Carlo for the harmonic oscillator, helium, hydrogen, and H2 - IPython notebook and FORTRAN90 ☆13 · Jun 23, 2016 · Updated 9 years ago
- Implements Flash Attention using CuTe ☆106 · Dec 17, 2024 · Updated last year
- ☆21 · Apr 24, 2026 · Updated last week
- Convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution ☆43 · Sep 29, 2025 · Updated 7 months ago
- ☆22 · Sep 3, 2024 · Updated last year
- Stochastic Series Expansion (SSE) for an isotropic S=1/2 antiferromagnetic quantum Heisenberg model on a 1D, 2D, or 3D lattice. Every lattic… ☆15 · Jan 23, 2021 · Updated 5 years ago
- ☆120 · May 16, 2025 · Updated 11 months ago
- ☆91 · Feb 29, 2024 · Updated 2 years ago
- A simple neural network in C++17 using the Eigen library, supporting both forward and backward propagation ☆11 · Jul 27, 2024 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT Attention ☆34 · Nov 29, 2024 · Updated last year
- Framework that reduces autotune overhead to zero for well-known deployments ☆99 · Sep 19, 2025 · Updated 7 months ago
- A simple and efficient memory pool implemented in C++11 ☆10 · Jun 2, 2022 · Updated 3 years ago
- A PyTorch version of Chinese multi-turn Cdial based on GPT + NEZHA ☆11 · Oct 22, 2022 · Updated 3 years ago
- Effective transpose on Hopper GPU ☆28 · Sep 6, 2025 · Updated 7 months ago
- TensorRT encapsulation: learn, rewrite, practice ☆29 · Oct 19, 2022 · Updated 3 years ago
- Notes on database internals ☆13 · Aug 18, 2022 · Updated 3 years ago
- A PyTorch implementation of the `SwinIR: Image Restoration Using Swin Transformer` paper ☆14 · May 31, 2023 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- Official PyTorch implementation of The Linear Attention Resurrection in Vision Transformer ☆16 · Sep 7, 2024 · Updated last year
- llama.cpp to PyTorch Converter ☆38 · Apr 8, 2024 · Updated 2 years ago
- ☆11 · May 2, 2023 · Updated 3 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆12 · Jun 10, 2024 · Updated last year
- Fastest kernels written from scratch ☆576 · Sep 18, 2025 · Updated 7 months ago