ReinForce-II / mmapeak
☆43 · Updated 2 weeks ago
Alternatives and similar repositories for mmapeak
Users interested in mmapeak are comparing it to the libraries listed below.
- NVIDIA Linux open GPU with P2P support ☆60 · Updated last week
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 8 months ago
- GPU benchmark ☆69 · Updated 8 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆160 · Updated this week
- ☆152 · Updated 3 months ago
- AI Tensor Engine for ROCm ☆285 · Updated last week
- ☆76 · Updated 9 months ago
- Fast and memory-efficient exact attention ☆193 · Updated this week
- Samples of good AI-generated CUDA kernels ☆91 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆381 · Updated 3 weeks ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆171 · Updated this week
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆548 · Updated last month
- RWKV v7 inference in pure C. ☆40 · Updated last week
- kernels, of the mega variety ☆586 · Updated 3 weeks ago
- AMD-related optimizations for transformer models ☆90 · Updated last month
- Development repository for the Triton language and compiler ☆135 · Updated this week
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- ☆218 · Updated 8 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 6 months ago
- ☆17 · Updated 10 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 3 months ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆338 · Updated last week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
- ☆102 · Updated this week
- A Quirky Assortment of CuTe Kernels ☆627 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆698 · Updated 2 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm ☆359 · Updated 8 months ago
- Learning about CUDA by writing PTX code. ☆144 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year