GaoYusong / llm.cpp
A C++ port of karpathy/llm.c featuring a tiny torch library while maintaining overall simplicity.
☆42 · Updated last year
Alternatives and similar repositories for llm.cpp
Users interested in llm.cpp are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- Tiny C++ LLM inference implementation from scratch ☆102 · Updated last week
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- A GPU-driven system framework for scalable AI applications ☆124 · Updated last year
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆550 · Updated 4 months ago
- Learning about CUDA by writing PTX code. ☆152 · Updated last year
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆51 · Updated 11 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆203 · Updated 4 months ago
- ☆96 · Updated 10 months ago
- Our first fully AI-generated deep learning system ☆481 · Updated this week
- GPU documentation for humans ☆518 · Updated last week
- Hand-Rolled GPU communications library ☆82 · Updated 2 months ago
- Inference Llama 2 in one file of pure C++ ☆87 · Updated 2 years ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆306 · Updated last year
- Fast and memory-efficient exact attention ☆111 · Updated last week
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆162 · Updated 2 months ago
- Header-only safetensors loader and saver in C++ ☆78 · Updated last month
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated last year
- ONNX Serving is a project written in C++ to serve onnx-mlir compiled models with GRPC and other protocols. Benefiting from C++ implement… ☆25 · Updated 4 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆201 · Updated this week
- CUDA/Metal accelerated language model inference ☆626 · Updated 8 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- Simple MPI implementation for prototyping or learning ☆300 · Updated 6 months ago
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆143 · Updated 3 months ago
- ☆71 · Updated 10 months ago
- ☆137 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated last year
- CPU inference for the DeepSeek family of large language models in C++ ☆315 · Updated 4 months ago
- High Performance Computing class taken at U.T.P., 2017 ☆106 · Updated 8 years ago
- GPT2 implementation in C++ using Ort ☆26 · Updated 5 years ago