keith2018 / TinyGPT
Tiny C++ LLM inference implementation from scratch
☆66 · Updated last month
Alternatives and similar repositories for TinyGPT
Users interested in TinyGPT are comparing it to the libraries listed below.
- Efficient inference of large language models. ☆149 · Updated last month
- A layered, decoupled deep learning inference engine ☆76 · Updated 8 months ago
- ☆124 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆132 · Updated 2 years ago
- A Llama model inference framework implemented in CUDA C++ ☆62 · Updated 11 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆87 · Updated 6 months ago
- A simple Transformer model implemented in C++, following "Attention Is All You Need". ☆52 · Updated 4 years ago
- A tiny deep learning training framework implemented from scratch in C++ that follows PyTorch's API. ☆106 · Updated last week
- ☢️ TensorRT 2023 second-round competition entry: Llama model inference acceleration and optimization based on TensorRT-LLM ☆50 · Updated 2 years ago
- ☆33 · Updated last year
- Llama 2 inference ☆43 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆201 · Updated 3 weeks ago
- Course materials from Bilibili ☆76 · Updated 2 years ago
- A simple and efficient memory pool implemented with C++11. ☆10 · Updated 3 years ago
- Qwen2 and Llama 3 C++ implementation ☆47 · Updated last year
- A simple general-purpose programming language ☆99 · Updated 2 months ago
- LLM deployment project based on ONNX. ☆45 · Updated last year
- 🤖 FFPA: extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim and a 1.8x~3x↑🎉 speedup vs SDPA EA. ☆226 · Updated 2 months ago
- ☆21 · Updated 4 years ago
- ☆27 · Updated last year
- ☆39 · Updated last week
- Free resource for the book AI Compiler Development Guide ☆47 · Updated 2 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 4 months ago
- Study notes on ggml, a machine learning inference framework ☆18 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak ⚡️ performance. ☆124 · Updated 5 months ago
- A C++ port of karpathy/llm.c featuring a tiny torch library while maintaining overall simplicity. ☆38 · Updated last year
- A minimal, easy-to-read PyTorch reimplementation of Qwen3 and Qwen2.5 VL with a fancy CLI ☆179 · Updated last month
- Step-by-step SGEMM optimization with CUDA ☆21 · Updated last year
- Code and notes for the six major CUDA parallel computing patterns ☆61 · Updated 5 years ago