tgautam03 / xGeMM
Accelerated General (FP32) Matrix Multiplication from scratch in CUDA
☆ 106 · Updated 2 months ago
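For context, a baseline FP32 GEMM kernel in CUDA looks roughly like the sketch below. This is an illustrative assumption, not code taken from xGeMM: projects like this typically start from such a naive kernel and then add tiling, shared memory, and vectorized loads to close the gap to cuBLAS.

```cuda
// Illustrative sketch only (assumed, not from xGeMM): a naive FP32 GEMM
// kernel computing C = A * B for row-major M x K and K x N matrices.
// Each thread produces one element of C.
__global__ void sgemm_naive(int M, int N, int K,
                            const float *A, const float *B, float *C)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y; // row index into C
    int col = blockIdx.x * blockDim.x + threadIdx.x; // column index into C

    if (row < M && col < N)
    {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

// Possible launch configuration: 16x16 thread blocks tiling the output matrix.
// dim3 block(16, 16);
// dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
// sgemm_naive<<<grid, block>>>(M, N, K, d_A, d_B, d_C);
```

Optimized versions (the focus of xGeMM and of several repositories listed below) reduce global-memory traffic by staging tiles of A and B in shared memory and computing multiple output elements per thread.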
Alternatives and similar repositories for xGeMM:
Users interested in xGeMM are comparing it to the libraries listed below
- Learning about CUDA by writing PTX code. ☆ 124 · Updated last year
- Multi-Threaded FP32 Matrix Multiplication on x86 CPUs ☆ 341 · Updated last month
- Notes on "Programming Massively Parallel Processors" by Hwu, Kirk, and Hajj (4th ed.) ☆ 52 · Updated 7 months ago
- A small autograd engine inspired by Karpathy's micrograd and PyTorch ☆ 250 · Updated 4 months ago
- PyTorch from scratch in pure C/CUDA and Python ☆ 40 · Updated 5 months ago
- GPT-2 in C ☆ 65 · Updated 2 months ago
- This repository is a curated collection of resources, tutorials, and practical examples designed to guide you through the journey of mast… ☆ 302 · Updated last month
- ☆ 46 · Updated 7 months ago
- Learnings and programs related to CUDA ☆ 328 · Updated last month
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆ 306 · Updated last week
- The Tensor (or Array) ☆ 427 · Updated 7 months ago
- Alex Krizhevsky's original code from Google Code ☆ 190 · Updated 9 years ago
- A C/C++ implementation of micrograd: a tiny autograd engine with a neural net on top. ☆ 65 · Updated last year
- ☆ 42 · Updated 2 weeks ago
- Visualization of cache-optimized matrix multiplication ☆ 105 · Updated this week
- ☆ 232 · Updated 2 months ago
- A simple Byte Pair Encoding (BPE) mechanism for tokenization, written purely in C ☆ 129 · Updated 4 months ago
- Tutorials on tinygrad ☆ 355 · Updated 3 weeks ago
- Some CUDA example code with READMEs. ☆ 90 · Updated 3 weeks ago
- Parallelized hyperdimensional tic-tac-toe ☆ 114 · Updated 6 months ago
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆ 174 · Updated last year
- Minimal, clean C code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization ☆ 21 · Updated 8 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆ 145 · Updated 9 months ago
- Could we make an ML stack in 100,000 lines of code? ☆ 30 · Updated 8 months ago