gf712 / gpt2-cpp
GPT2 implementation in C++ using Ort (the ONNX Runtime C++ API)
☆26 · Updated 4 years ago
Alternatives and similar repositories for gpt2-cpp
Users interested in gpt2-cpp are comparing it to the libraries listed below.
- Inference Llama 2 in one file of pure C++ ☆83 · Updated 2 years ago
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 7 months ago
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated last year
- LLM training in simple, raw C/CUDA ☆105 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆296 · Updated last year
- Inference of Mamba models in pure C ☆191 · Updated last year
- RWKV in nanoGPT style ☆192 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- ☆124 · Updated last year
- A C++ port of karpathy/llm.c features a tiny torch library while maintaining overall simplicity. ☆36 · Updated last year
- Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation ☆263 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆55 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆189 · Updated 3 weeks ago
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆138 · Updated last year
- Train your own small bitnet model ☆75 · Updated 11 months ago
- Tiny C++ LLM inference implementation from scratch ☆66 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆28 · Updated 2 years ago
- Embeddings focused small version of Llama NLP model ☆105 · Updated 2 years ago
- Clover: Quantized 4-bit Linear Algebra Library ☆113 · Updated 7 years ago
- tinygrad port of the RWKV large language model. ☆44 · Updated 7 months ago
- Inference RWKV v7 in pure C. ☆40 · Updated this week
- Step by step explanation/tutorial of llama2.c ☆224 · Updated 2 years ago
- llama3.cuda is a pure C/CUDA implementation for the Llama 3 model. ☆343 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- Inference RWKV with multiple supported backends. ☆60 · Updated this week
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM ) ☆221 · Updated 9 months ago
- ☆69 · Updated 2 years ago