sunkx109 / My-Torch-Extension
A minimalist, extensible PyTorch extension for implementing custom backend operators.
☆33 · Updated last year
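To make the repo's purpose concrete, here is a minimal sketch (not taken from My-Torch-Extension itself) of the standard pattern such extensions build on: JIT-compiling a C++ operator and binding it into PyTorch with torch.utils.cpp_extension.load_inline. The module and function names (my_ext, scaled_add) are hypothetical.

```python
import torch
from torch.utils.cpp_extension import load_inline

# A toy backend operator implemented in C++: out = a + alpha * b.
# load_inline prepends <torch/extension.h> to this source automatically.
cpp_source = r"""
torch::Tensor scaled_add(torch::Tensor a, torch::Tensor b, double alpha) {
  return a + alpha * b;
}
"""

# JIT-compile the source and auto-generate Python bindings for the
# listed function (requires a working C++ toolchain at runtime).
my_ext = load_inline(
    name="my_ext",          # hypothetical module name
    cpp_sources=cpp_source,
    functions="scaled_add",
)

a, b = torch.randn(4), torch.randn(4)
print(my_ext.scaled_add(a, b, 0.5))  # same result as a + 0.5 * b
```

My-Torch-Extension presumably wraps this pattern in a more structured build and registration layout; see the repo itself for its actual API.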
Alternatives and similar repositories for My-Torch-Extension
Users interested in My-Torch-Extension are comparing it to the libraries listed below.
- Learning how CUDA works ☆295 · Updated 5 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆402 · Updated 2 months ago
- ☆137 · Updated last year
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis ☆101 · Updated 3 weeks ago
- A lightweight llama-like LLM inference framework based on Triton kernels ☆144 · Updated last week
- ⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA 🎉 ☆194 · Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆214 · Updated last month
- A collection of memory-efficient attention operators implemented in the Triton language ☆275 · Updated last year
- Puzzles for learning Triton; play them with minimal environment configuration! ☆446 · Updated 8 months ago
- ☆140 · Updated last month
- ☆67 · Updated 7 months ago
- ☆128 · Updated 8 months ago
- A CUDA tutorial for learning CUDA programming from scratch ☆247 · Updated last year
- Implement Flash Attention using CuTe ☆92 · Updated 7 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆370 · Updated 10 months ago
- Implement custom operators in PyTorch with CUDA/C++ ☆66 · Updated 2 years ago
- A simplified flash-attention implementation using CUTLASS, written for teaching purposes ☆45 · Updated 11 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆78 · Updated 3 months ago
- How to learn PyTorch and OneFlow ☆445 · Updated last year
- A simple high-performance CUDA GEMM implementation ☆392 · Updated last year
- ☆33 · Updated 2 months ago
- ☆91 · Updated 2 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆151 · Updated 2 weeks ago
- ☆145 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆39 · Updated 5 months ago
- ☆24 · Updated 4 months ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS repository ☆74 · Updated 11 months ago
- ☆171 · Updated last year
- Parallel Prefix Sum (Scan) with CUDA ☆24 · Updated last year
- ☆59 · Updated 8 months ago