Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O
☆561 · Sep 13, 2025 · Updated 6 months ago
Alternatives and similar repositories for yalm
Users interested in yalm are comparing it to the repositories listed below.
- CUDA/Metal accelerated language model inference ☆630 · May 29, 2025 · Updated 9 months ago
- CPU inference for the DeepSeek family of large language models in C++ ☆315 · Oct 2, 2025 · Updated 5 months ago
- A llama model inference framework implemented in CUDA C++ ☆65 · Nov 8, 2024 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Mar 21, 2026 · Updated last week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆10,022 · Updated this week
- High-performance FP8 GEMM kernels for SM89 and later GPUs ☆20 · Jan 24, 2025 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆271 · Jul 1, 2025 · Updated 8 months ago
- Material for gpu-mode lectures ☆5,865 · Feb 1, 2026 · Updated last month
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 9 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆82 · Aug 12, 2024 · Updated last year
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,098 · Dec 30, 2024 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,082 · Updated this week
- TiledLower is a dataflow analysis and codegen framework written in Rust ☆13 · Nov 23, 2024 · Updated last year
- Learnings and programs related to CUDA ☆435 · Jun 29, 2025 · Updated 8 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Oct 29, 2025 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆438 · Feb 1, 2026 · Updated last month
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) ☆884 · Updated this week
- ☆44 · Nov 1, 2025 · Updated 4 months ago
- How to optimize some algorithms in CUDA ☆2,887 · Updated this week
- A lightweight design for computation-communication overlap ☆225 · Jan 20, 2026 · Updated 2 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆236 · Jan 14, 2026 · Updated 2 months ago
- Materials for learning SGLang ☆785 · Jan 5, 2026 · Updated 2 months ago
- Puzzles for learning Triton, playable with minimal environment configuration! ☆647 · Mar 17, 2026 · Updated last week
- A layered, decoupled deep learning inference engine ☆79 · Feb 17, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI ☆4,953 · Mar 20, 2026 · Updated last week
- Step-by-step optimization of CUDA SGEMM ☆448 · Mar 30, 2022 · Updated 3 years ago
- InfiniTensor is a high-performance inference engine tailored for GPUs and AI accelerators. Its design focuses on effective deployment and… ☆313 · Mar 16, 2026 · Updated last week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA ☆255 · Feb 13, 2026 · Updated last month
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance ☆409 · Jan 2, 2025 · Updated last year
- Nano vLLM ☆12,353 · Nov 3, 2025 · Updated 4 months ago
- Efficient Triton kernels for LLM training ☆6,242 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆106 · Jun 28, 2025 · Updated 9 months ago
- Matrix multiplication on GPUs for matrices stored on a CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs ☆32 · Apr 2, 2025 · Updated 11 months ago
- Code and notes for six major CUDA parallel computing patterns ☆61 · Jul 30, 2020 · Updated 5 years ago
- SGLang is a high-performance serving framework for large language models and multimodal models ☆24,829 · Mar 21, 2026 · Updated last week
- ☆53 · Feb 24, 2026 · Updated last month
- Tensor library for machine learning ☆14,294 · Mar 16, 2026 · Updated last week