Bruce-Lee-LY / memory_pool
A simple and efficient memory pool implemented in C++11.
☆8 · Updated 3 years ago
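For context, here is a minimal sketch of the fixed-size-block, free-list technique that pools like this typically build on, written in C++11. This is an illustration, not the repository's code; the class and member names are hypothetical.

```cpp
// Minimal fixed-size-block memory pool sketch (C++11).
// Hypothetical illustration; not code from Bruce-Lee-LY/memory_pool.
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

class FixedBlockPool {
public:
    FixedBlockPool(std::size_t block_size, std::size_t block_count)
        : block_size_(block_size < sizeof(void*) ? sizeof(void*) : block_size),
          storage_(block_size_ * block_count) {
        // Thread every block onto an intrusive singly linked free list,
        // reusing the first bytes of each free block as the "next" pointer.
        for (std::size_t i = 0; i < block_count; ++i) {
            void* block = storage_.data() + i * block_size_;
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    }

    // O(1) allocation: pop the head of the free list.
    void* allocate() {
        if (free_list_ == nullptr) throw std::bad_alloc();
        void* block = free_list_;
        free_list_ = *static_cast<void**>(block);
        return block;
    }

    // O(1) deallocation: push the block back onto the free list.
    void deallocate(void* block) {
        *static_cast<void**>(block) = free_list_;
        free_list_ = block;
    }

private:
    std::size_t block_size_;
    std::vector<char> storage_;   // one contiguous slab for all blocks
    void* free_list_ = nullptr;   // head of the intrusive free list
};

int main() {
    FixedBlockPool pool(64, 1024);  // 1024 blocks of 64 bytes each
    void* p = pool.allocate();
    std::cout << "allocated block at " << p << '\n';
    pool.deallocate(p);
}
```

Note the simplifying assumptions: a single block size, no thread safety, and block alignment inherited from the slab (so block_size should be a multiple of the required alignment). A production pool layers size classes, locking or per-thread caches, and growth policies on top of this core.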
Alternatives and similar repositories for memory_pool
Users interested in memory_pool are comparing it to the libraries listed below.
- A practical way of learning Swizzle ☆20 · Updated 4 months ago
- ☆21 · Updated 4 years ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆18 · Updated 8 months ago
- ☆17 · Updated last year
- A TVM-like CUDA/C code generator. ☆9 · Updated 3 years ago
- Solutions for Programming Massively Parallel Processors, 2nd Edition ☆32 · Updated 3 years ago
- A layered, decoupled deep learning inference engine ☆73 · Updated 3 months ago
- ☆33 · Updated last year
- A study of cutlass ☆21 · Updated 6 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆110 · Updated 8 months ago
- ⚡️HGEMM written from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance⚡️ (see the WMMA sketch after this list) ☆79 · Updated 3 weeks ago
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 6 months ago
- ☆18 · Updated 2 months ago
- ☆22 · Updated 2 months ago
- ☆73 · Updated 3 weeks ago
- CPU memory compiler and parallel programming ☆26 · Updated 6 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 · Updated 2 months ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆30 · Updated last year
- ☆112 · Updated last year
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆62 · Updated 8 months ago
- ☆15 · Updated 6 years ago
- A study of Ampere's sparse matmul ☆18 · Updated 4 years ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆92 · Updated last week
- Optimizing GEMM with Tensor Cores, step by step ☆26 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆68 · Updated 9 months ago
- ☆14 · Updated 9 months ago
- ☆134 · Updated last year
- Flash Attention implemented using CuTe. ☆85 · Updated 5 months ago
- A tutorial for CUDA & PyTorch ☆142 · Updated 4 months ago
- ☆11 · Updated 3 months ago
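For the HGEMM-from-scratch entry above, here is a minimal sketch of the core WMMA idea: each warp accumulates one 16×16 tile of C with Tensor Core mma operations. This is an illustrative example under stated assumptions (sm_70+ GPU, M/N/K multiples of 16, row-major A and C, column-major B), not code from that repository.

```cuda
// Minimal WMMA HGEMM sketch: one warp computes one 16x16 tile of
// C (fp32) = A (fp16, row-major) * B (fp16, column-major).
// Illustrative only; assumes M, N, K are multiples of 16.
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

__global__ void wmma_hgemm(const half* A, const half* B, float* C,
                           int M, int N, int K) {
    // One warp per 16x16 output tile; blockDim.x must be a multiple of 32.
    int warp_m = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warp_n = blockIdx.y * blockDim.y + threadIdx.y;
    if (warp_m * 16 >= M || warp_n * 16 >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;
    wmma::fill_fragment(acc_frag, 0.0f);

    // March along K in 16-wide steps, accumulating into the fragment.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warp_m * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + warp_n * 16 * K + k, K);
        wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);
    }
    wmma::store_matrix_sync(C + warp_m * 16 * N + warp_n * 16, acc_frag,
                            N, wmma::mem_row_major);
}
```

A peak-performance HGEMM, as in the repositories listed above, would layer shared-memory staging, double buffering, and swizzled layouts on top of this skeleton.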