keith2018 / TinyTorch
A tiny deep learning training framework implemented from scratch in C++ that follows PyTorch's API.
☆49 · Updated last month
Alternatives and similar repositories for TinyTorch
Users interested in TinyTorch are comparing it to the libraries listed below.
- Tiny C++11 GPT-2 inference implementation from scratch ☆58 · Updated this week
- A layered, decoupled deep learning inference engine ☆73 · Updated 3 months ago
- ☆70 · Updated 2 years ago
- A PyTorch-like deep learning framework. Just for fun. ☆154 · Updated last year
- Machine Learning Compiler Road Map ☆44 · Updated last year
- Implement Flash Attention using Cute. ☆82 · Updated 5 months ago
- Implement custom operators in PyTorch with CUDA/C++ ☆61 · Updated 2 years ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆71 · Updated last month
- Codes & examples for "CUDA - From Correctness to Performance" ☆98 · Updated 6 months ago
- ☆123 · Updated last year
- A tutorial series on x86-64 SIMD vector optimization ☆118 · Updated last month
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- Solutions for Programming Massively Parallel Processors, 2nd Edition ☆32 · Updated 2 years ago
- A tutorial for CUDA & PyTorch ☆140 · Updated 3 months ago
- Examples and exercises from the book Programming Massively Parallel Processors - A Hands-on Approach. David B. Kirk and Wen-mei W. Hwu (T… ☆66 · Updated 4 years ago
- A llama model inference framework implemented in CUDA C++ ☆56 · Updated 6 months ago
- ☆27 · Updated 11 months ago
- Homepage of the Advanced Compiler Lab ☆86 · Updated 3 weeks ago
- Personal Notes for Learning HPC & Parallel Computation [Actively Adding New Content] ☆66 · Updated 2 years ago
- ☆22 · Updated last month
- ☆274 · Updated 4 years ago
- A light llama-like LLM inference framework based on Triton kernels. ☆118 · Updated this week
- Easy CUDA code ☆71 · Updated 4 months ago
- 📚FFPA(Split-D): Extend FlashAttention with Split-D for large headdim, O(1) GPU SRAM complexity, 1.8x~3x↑🎉 faster than SDPA EA. ☆174 · Updated last week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆181 · Updated 3 months ago
- A simple high-performance CUDA GEMM implementation. ☆366 · Updated last year
- Annotated notes on the cuDNN documentation and its usage ☆19 · Updated last year
- ☆237 · Updated 3 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆61 · Updated 4 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 8 months ago