mrzhuzhe / riven
CPU Memory Compiler and Parallel Programming
☆26 · Updated last year
Alternatives and similar repositories for riven
Users interested in riven are comparing it to the libraries listed below.
- A tutorial for CUDA & PyTorch ☆208 · Updated this week
- This project is about convolution operator optimization on GPU, including GEMM-based (implicit GEMM) convolution. ☆43 · Updated 3 months ago
- ☆144 · Updated last year
- ☆49 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆64 · Updated last year
- ☆26 · Updated 5 months ago
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆78 · Updated last year
- A simplified version of flash-attention implemented with cutlass, intended for teaching ☆54 · Updated last year
- Solutions for Programming Massively Parallel Processors, 2nd edition ☆34 · Updated 3 years ago
- ☆38 · Updated last year
- ☆21 · Updated 4 years ago
- ☆120 · Updated last year
- ☆159 · Updated 2 months ago
- Examples of CUDA implementations using Cutlass CuTe ☆269 · Updated 6 months ago
- ☆20 · Updated last year
- ☆70 · Updated last year
- ☆119 · Updated 9 months ago
- ☆156 · Updated last year
- Optimized softmax in Triton for many cases ☆22 · Updated last year
- Implement custom operators in PyTorch with CUDA/C++ ☆76 · Updated 3 years ago
- A layered, decoupled deep learning inference engine ☆79 · Updated 11 months ago
- ☆43 · Updated 4 years ago
- ☆112 · Updated 8 months ago
- A lightweight llama-like LLM inference framework based on Triton kernels ☆169 · Updated 3 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak⚡️ performance ☆145 · Updated 8 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆61 · Updated 5 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks ☆135 · Updated 2 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆71 · Updated last year
- 🤖FFPA: Extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA ☆246 · Updated last week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆44 · Updated 11 months ago