ReinForce-II / mmapeak
☆28, updated 2 months ago
Alternatives and similar repositories for mmapeak
Users interested in mmapeak are comparing it to the libraries listed below.
- PyTorch half precision gemm lib w/ fused optional bias + optional relu/gelu (☆70, updated 6 months ago)
- GPU benchmark (☆63, updated 4 months ago)
- (☆70, updated 6 months ago)
- (☆17, updated 6 months ago)
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. (☆71, updated 4 months ago)
- (☆137, updated this week)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (☆154, updated 8 months ago)
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning (☆45, updated 4 months ago)
- Samples of good AI generated CUDA kernels (☆83, updated 3 weeks ago)
- (☆68, updated this week)
- (☆88, updated last year)
- High-Performance SGEMM on CUDA devices (☆95, updated 5 months ago)
- RWKV-7: Surpassing GPT (☆91, updated 7 months ago)
- A safetensors extension to efficiently store sparse quantized tensors on disk (☆129, updated this week)
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… (☆43, updated 10 months ago)
- NVIDIA Linux open GPU with P2P support (☆26, updated last month)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆277, updated last year)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (☆126, updated 6 months ago)
- ring-attention experiments (☆144, updated 8 months ago)
- Inference RWKV v7 in pure C (☆34, updated 2 months ago)
- QuIP quantization (☆54, updated last year)
- A collection of tricks and tools to speed up transformer models (☆167, updated 3 weeks ago)
- DFloat11: Lossless LLM Compression for Efficient GPU Inference (☆426, updated last month)
- (☆143, updated 7 months ago)
- KV cache compression for high-throughput LLM inference (☆131, updated 4 months ago)
- 👷 Build compute kernels (☆68, updated this week)
- Inference of Mamba models in pure C (☆187, updated last year)
- 1.58-bit LLaMa model (☆81, updated last year)
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) (☆252, updated 7 months ago)
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) (☆66, updated 3 months ago)