andrewkchan / yalm
Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O
☆539 · Updated 3 months ago
Alternatives and similar repositories for yalm
Users interested in yalm are comparing it to the repositories listed below:
- Perplexity GPU Kernels (☆539, updated last month)
- kernels, of the mega variety (☆631, updated 2 months ago)
- Materials for learning SGLang (☆682, updated 2 weeks ago)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆244, updated 7 months ago)
- A throughput-oriented high-performance serving framework for LLMs (☆923, updated last month)
- CUDA/Metal accelerated language model inference (☆625, updated 6 months ago)
- Cataloging released Triton kernels (☆277, updated 3 months ago)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… (☆790, updated 9 months ago)
- Fast low-bit matmul kernels in Triton (☆407, updated 3 weeks ago)
- Fastest kernels written from scratch (☆499, updated 3 months ago)
- a minimal cache manager for PagedAttention, on top of llama3 (☆127, updated last year); the block-table idea is sketched in the first example after this list
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆448, updated 6 months ago)
- GPU documentation for humans (☆416, updated last week)
- A Quirky Assortment of CuTe Kernels (☆687, updated last week)
- flash attention tutorial written in python, triton, cuda, cutlass (☆459, updated 7 months ago)
- Flash Attention in ~100 lines of CUDA (forward pass only) (☆1,023, updated 11 months ago); the core online-softmax recurrence is sketched in the second example after this list
- Puzzles for learning Triton, play it with minimal environment configuration! (☆571, updated 2 weeks ago)
- Efficient LLM Inference over Long Sequences (☆393, updated 5 months ago)
- Helpful kernel tutorials and examples for tile-based GPU programming (☆412, updated last week)
- Applied AI experiments and examples for PyTorch (☆311, updated 3 months ago)
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆349, updated this week)
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆961, updated last year)
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA (☆239, updated last month)
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment (☆725, updated 4 months ago)
- A low-latency & high-throughput serving engine for LLMs (☆456, updated 2 months ago)
- Fast CUDA matrix multiplication from scratch (☆979, updated 3 months ago); the naive baseline such guides start from is sketched in the third example after this list
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process communication… (☆415, updated last month)
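The three sketches below illustrate techniques that recur in this list. First, a minimal sketch of the block-table idea behind PagedAttention (as in the cache-manager entry above): the KV cache is carved into fixed-size physical blocks, and a per-sequence table maps logical token positions to physical slots, so sequences grow without contiguous preallocation. All names here (`BlockAllocator`, `Sequence`, `BLOCK_SIZE`) are hypothetical, not taken from any listed repository.

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

constexpr int BLOCK_SIZE = 16;  // tokens per KV-cache block (illustrative)

struct BlockAllocator {
    std::vector<int> free_blocks;  // pool of free physical block ids
    explicit BlockAllocator(int n) {
        for (int i = n - 1; i >= 0; --i) free_blocks.push_back(i);
    }
    int alloc() {
        assert(!free_blocks.empty());
        int b = free_blocks.back();
        free_blocks.pop_back();
        return b;
    }
};

struct Sequence {
    std::vector<int> block_table;  // logical block index -> physical block id
    int num_tokens = 0;

    // Append one token; grab a new physical block when the last one is full.
    void append_token(BlockAllocator &a) {
        if (num_tokens % BLOCK_SIZE == 0) block_table.push_back(a.alloc());
        ++num_tokens;
    }
    // Where token `pos` lives in the physical KV cache.
    int physical_slot(int pos) const {
        return block_table[pos / BLOCK_SIZE] * BLOCK_SIZE + pos % BLOCK_SIZE;
    }
};

int main() {
    BlockAllocator alloc(64);
    Sequence seq;
    for (int t = 0; t < 40; ++t) seq.append_token(alloc);
    // 40 tokens at block size 16 -> 3 physical blocks; no contiguity needed.
    printf("blocks used: %zu, slot of token 17: %d\n",
           seq.block_table.size(), seq.physical_slot(17));
    return 0;
}
```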
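Second, a hedged sketch of the online-softmax recurrence at the heart of a FlashAttention forward pass (as in the ~100-lines-of-CUDA entry): a running max `m` and normalizer `l` are updated incrementally, so the full attention-score matrix is never materialized. This is a scalar, single-query toy under assumed inputs, not any repository's kernel.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Toy scores s_i = q . k_i for one query row (precomputed here),
    // and a 1-d stand-in for the value vectors V.
    std::vector<float> scores = {0.5f, 2.0f, -1.0f, 3.0f, 0.0f};
    std::vector<float> values = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};

    float m = -INFINITY;  // running max of scores seen so far
    float l = 0.0f;       // running sum of exp(s - m)
    float o = 0.0f;       // running rescaled output accumulator

    for (size_t i = 0; i < scores.size(); ++i) {
        float m_new = std::max(m, scores[i]);
        float correction = std::exp(m - m_new);  // rescale old accumulators
        float p = std::exp(scores[i] - m_new);   // weight of the new element
        l = l * correction + p;
        o = o * correction + p * values[i];
        m = m_new;
    }
    printf("attention output: %f\n", o / l);
    return 0;
}
```

The final `o / l` matches what a two-pass softmax over the full score vector would produce; tiled CUDA kernels apply the same rescaling per key block held in shared memory.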
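Third, the naive one-thread-per-output SGEMM baseline that from-scratch CUDA matmul guides typically begin with before adding tiling, shared memory, and vectorized loads. `sgemm_naive` and `CHECK_CUDA` are illustrative names; this is a standard textbook kernel, not code from the listed repository.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define CHECK_CUDA(call)                                            \
    do {                                                            \
        cudaError_t err = (call);                                   \
        if (err != cudaSuccess) {                                   \
            fprintf(stderr, "CUDA error %s at %s:%d\n",             \
                    cudaGetErrorString(err), __FILE__, __LINE__);   \
            return 1;                                               \
        }                                                           \
    } while (0)

// C[M x N] = A[M x K] * B[K x N], row-major. One thread per output element.
__global__ void sgemm_naive(int M, int N, int K,
                            const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;
    float *A, *B, *C;
    CHECK_CUDA(cudaMallocManaged(&A, M * K * sizeof(float)));
    CHECK_CUDA(cudaMallocManaged(&B, K * N * sizeof(float)));
    CHECK_CUDA(cudaMallocManaged(&C, M * N * sizeof(float)));
    for (int i = 0; i < M * K; ++i) A[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 2.0f;

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    sgemm_naive<<<grid, block>>>(M, N, K, A, B, C);
    CHECK_CUDA(cudaDeviceSynchronize());

    printf("C[0] = %f (expected %f)\n", C[0], 2.0f * K);  // 512.0
    CHECK_CUDA(cudaFree(A));
    CHECK_CUDA(cudaFree(B));
    CHECK_CUDA(cudaFree(C));
    return 0;
}
```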