Kedreamix / pytorch-cppcuda-tutorial
A tutorial for writing custom PyTorch C++/CUDA kernels, applied to volume rendering (NeRF).
☆28 · Updated last year
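The tutorial this page indexes applies a custom C++/CUDA kernel to NeRF-style volume rendering. As a rough orientation, the per-ray compositing loop that such kernels typically fuse and parallelize can be sketched in pure Python (the function name and signature here are illustrative, not the tutorial's actual API):

```python
import math

def composite_ray(sigmas, rgbs, deltas):
    """Reference (pure-Python) volume-rendering compositing for one ray.

    sigmas: per-sample densities, rgbs: per-sample (r, g, b) colors,
    deltas: per-sample segment lengths along the ray. A custom CUDA
    kernel would run this accumulation for all rays in parallel.
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # T_i = exp(-sum_{j<i} sigma_j * delta_j)
    for sigma, rgb, delta in zip(sigmas, rgbs, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # sample opacity
        weight = transmittance * alpha           # w_i = T_i * alpha_i
        color = [c + weight * ch for c, ch in zip(color, rgb)]
        transmittance *= 1.0 - alpha
    return color, 1.0 - transmittance  # composited color, total opacity
```

A fused CUDA implementation avoids materializing the per-sample weights in global memory, which is the main motivation for writing this as a custom op rather than composing PyTorch tensor ops.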
Alternatives and similar repositories for pytorch-cppcuda-tutorial
Users interested in pytorch-cppcuda-tutorial are comparing it to the repositories listed below.
- Tutorials for writing high-performance GPU operators in AI frameworks.☆131 · Updated 2 years ago
- CPU memory compiler and parallel programming☆26 · Updated 10 months ago
- Implement custom operators in PyTorch with CUDA/C++☆71 · Updated 2 years ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention☆181 · Updated last month
- ☆107 · Updated last month
- Implement Flash Attention using CuTe.☆95 · Updated 9 months ago
- ☆172 · Updated 2 years ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.☆116 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference.☆43 · Updated 3 months ago
- Code & examples for "CUDA - From Correctness to Performance"☆111 · Updated 11 months ago
- Quantized Attention on GPU☆44 · Updated 10 months ago
- ☆139 · Updated last year
- A simple introduction to CUDA, covering the CUDA execution model, thread hierarchy, the CUDA memory model, how to write kernel functions, and the two ways to use CUDA extensions with PyTorch. A basic entry point to developing PyTorch-based CUDA extensions.☆94 · Updated 3 years ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library☆17 · Updated 3 weeks ago
- A sparse attention kernel supporting mixed sparse patterns☆303 · Updated 7 months ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA.☆218 · Updated last month
- A LLaMA model inference framework implemented in CUDA C++☆62 · Updated 10 months ago
- A tutorial for CUDA and PyTorch☆156 · Updated 8 months ago
- Optimized softmax implementations in Triton for many cases☆21 · Updated last year
- A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch.☆33 · Updated last year
- LLM theoretical performance analysis tools, supporting parameter, FLOPs, memory, and latency analysis.☆107 · Updated 2 months ago
- ☆42 · Updated last year
- LLM inference with a deep learning accelerator.☆50 · Updated 8 months ago
- Code release for the book "Efficient Training in PyTorch"☆101 · Updated 5 months ago
- An annotated nano_vllm repository, with MiniCPM4 support and new-model registration added☆75 · Updated last month
- ☆143 · Updated 2 months ago
- A flash attention tutorial written in Python, Triton, CUDA, and CUTLASS☆420 · Updated 4 months ago
- ☆106 · Updated 4 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model]☆42 · Updated 3 weeks ago
- A PyTorch implementation of DeepSeek Native Sparse Attention☆96 · Updated last month