gf712 / gpt2-cpp
GPT-2 implementation in C++ using ONNX Runtime (Ort)
☆24, updated 3 years ago
Related projects
Alternatives and complementary repositories for gpt2-cpp
- LLM training in simple, raw C/CUDA (☆87, updated 6 months ago)
- Experiments with BitNet inference on CPU (☆50, updated 7 months ago)
- Inference Llama 2 in one file of pure C++ (☆80, updated last year)
- Tiny C++11 GPT-2 inference implementation from scratch (☆48, updated 10 months ago)
- Inference of Mamba models in pure C (☆179, updated 8 months ago)
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… (☆51, updated 4 months ago)
- RWKV in nanoGPT style (☆178, updated 5 months ago)
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models (☆30, updated last year)
- Train your own small BitNet model (☆56, updated last month)
- GGML implementation of the BERT model with Python bindings and quantization (☆51, updated 9 months ago)
- Python bindings for ggml (☆132, updated 2 months ago)
- Inference Vision Transformer (ViT) in plain C/C++ with ggml (☆234, updated 7 months ago)
- tinygrad port of the RWKV large language model (☆43, updated 5 months ago)
- Port of Suno AI's Bark in C/C++ for fast inference (☆54, updated 7 months ago)
- Stable Diffusion in pure C/C++ (☆60, updated last year)
- RWKV, in easy-to-read code (☆55, updated last week)
- llama.cpp fork with additional SOTA quants and improved performance (☆94, updated this week)
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… (☆307, updated 9 months ago)
- A converter and basic tester for RWKV ONNX (☆42, updated 9 months ago)
- Example of applying CUDA graphs to LLaMA-v2 (☆10, updated last year)
- Asynchronous/distributed speculative evaluation for Llama 3 (☆37, updated 3 months ago)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆98, updated 2 months ago)
- ggml implementation of embedding models including SentenceTransformer and BGE (☆52, updated 11 months ago)
- Inference Llama 2 in one file of pure C & one file with CUDA (☆17, updated last year)
- Minimal C implementation of speculative decoding based on llama2.c (☆17, updated 4 months ago)
- General purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends)… (☆41, updated last month)
- Fast matrix multiplications for lookup-table-quantized LLMs (☆187, updated this week)