DeepLink-org / DIOPI
☆75 · Updated 11 months ago
Alternatives and similar repositories for DIOPI
Users interested in DIOPI are comparing it to the libraries listed below.
- ☆70 · Updated 11 months ago
- ☆150 · Updated 9 months ago
- ☆139 · Updated last year
- A benchmark suite designed especially for deep learning operators ☆42 · Updated 2 years ago
- ☆129 · Updated 10 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆483 · Updated 7 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆265 · Updated 2 months ago
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆703 · Updated this week
- PyTorch distributed training acceleration framework ☆53 · Updated 2 months ago
- A model compilation solution for various hardware ☆451 · Updated 2 months ago
- ☆91 · Updated last week
- ☆59 · Updated 11 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆97 · Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆113 · Updated 5 months ago
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis (see the rough estimate sketched after this list). ☆108 · Updated 3 months ago
- ☆137 · Updated 10 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆385 · Updated 2 weeks ago
- ☆141 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆241 · Updated 3 months ago
- ☆107 · Updated 5 months ago
- A simple, high-performance CUDA GEMM implementation. ☆411 · Updated last year
- Development repository for the Triton-Linalg conversion ☆202 · Updated 8 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆428 · Updated 5 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆282 · Updated last year
- ☆148 · Updated 7 months ago
- 🤖 FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆223 · Updated 2 months ago
- ☆109 · Updated 6 months ago
- ☆507 · Updated last month
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆65 · Updated last year
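
The theoretical performance analysis entry in the list above estimates parameters, FLOPs, memory, and latency from a model configuration. The sketch below is not that repository's API; the function name, model configuration, and hardware numbers (bandwidth and peak throughput roughly in the range of an A100-class GPU) are assumptions for illustration, using the common approximations params ≈ 12·L·d² and FLOPs/token ≈ 2·params.

```python
# Hedged sketch, not any listed tool's actual API: a rough roofline-style
# estimate of per-token decode cost for a dense transformer.

def decode_estimate(layers, d_model, vocab, seq_len,
                    bytes_per_param=2,      # fp16/bf16 weights (assumed)
                    mem_bw_gbs=2039.0,      # assumed HBM bandwidth, GB/s
                    peak_tflops=312.0):     # assumed fp16 tensor-core peak
    # Parameter count: 12 * L * d^2 for the transformer blocks plus the embedding.
    params = 12 * layers * d_model**2 + vocab * d_model
    # Forward matmul FLOPs per generated token (attention score FLOPs ignored).
    flops_per_token = 2 * params
    # KV cache holds K and V for every layer and position.
    kv_cache_bytes = 2 * layers * seq_len * d_model * bytes_per_param
    # Decode is usually bandwidth-bound: weights and KV cache are read per token.
    t_mem = (params * bytes_per_param + kv_cache_bytes) / (mem_bw_gbs * 1e9)
    t_compute = flops_per_token / (peak_tflops * 1e12)
    return {
        "params_B": params / 1e9,
        "flops_per_token_G": flops_per_token / 1e9,
        "kv_cache_GB": kv_cache_bytes / 1e9,
        "latency_lower_bound_ms": max(t_mem, t_compute) * 1e3,
    }

if __name__ == "__main__":
    # Roughly 7B-class configuration; values are illustrative assumptions.
    print(decode_estimate(layers=32, d_model=4096, vocab=32000, seq_len=2048))
```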