sunkx109 / My-Torch-Extension
A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch.
☆38 · Updated last year
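For readers new to the topic, the sketch below illustrates the kind of operator registration such an extension builds on, using PyTorch's stock `torch.library` API rather than My-Torch-Extension's own interface. The `myext::scaled_add` namespace and op are illustrative only, and `torch.library.custom_op` requires PyTorch 2.4 or newer.

```python
import torch

# Hypothetical namespace/op for illustration; not part of My-Torch-Extension.
@torch.library.custom_op("myext::scaled_add", mutates_args=())
def scaled_add(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    # Reference implementation; a backend extension would dispatch this to
    # its own CUDA/C++ kernel instead.
    return x + alpha * y

# Fake (meta) kernel so the op traces cleanly under torch.compile.
@scaled_add.register_fake
def _(x, y, alpha):
    return torch.empty_like(x)

a, b = torch.randn(4), torch.randn(4)
print(torch.ops.myext.scaled_add(a, b, 2.0))
```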
Alternatives and similar repositories for My-Torch-Extension
Users interested in My-Torch-Extension are comparing it to the libraries listed below.
- Learning how CUDA works ☆366 · Updated 10 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆473 · Updated 8 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆266 · Updated 6 months ago
- A lightweight llama-like LLM inference framework based on Triton kernels. ☆168 · Updated last week
- ☆150 · Updated 6 months ago
- ☆144 · Updated last year
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis. ☆114 · Updated 6 months ago
- ☆114 · Updated 3 months ago
- ☆70 · Updated last year
- Puzzles for learning Triton; play them with minimal environment configuration! ☆590 · Updated 2 weeks ago
- A CUDA tutorial for learning CUDA programming from scratch ☆264 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆63 · Updated last year
- 🤖FFPA: Extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆242 · Updated last month
- A simplified flash-attention implementation using CUTLASS, intended as a teaching example ☆52 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- An easy-to-understand TensorOp Matmul tutorial ☆403 · Updated this week
- ☆39 · Updated 8 months ago
- Implement Flash Attention using CuTe. ☆100 · Updated last year
- How to learn PyTorch and OneFlow ☆469 · Updated last year
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆139 · Updated 5 months ago
- A simple, high-performance CUDA GEMM implementation. ☆423 · Updated 2 years ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆205 · Updated last month
- ☆158 · Updated 2 months ago
- ☆112 · Updated 7 months ago
- Implement custom operators in PyTorch with CUDA/C++ ☆76 · Updated 3 years ago
- ☆281 · Updated 2 months ago
- 📚200+ Tensor/CUDA Cores kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆59 · Updated 8 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions ☆515 · Updated last year
- Some HPC projects for learning ☆26 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the CUTLASS repository ☆78 · Updated last year