gf712 / gpt2-cpp
GPT-2 implementation in C++ using the ONNX Runtime (Ort) API
☆26 · Updated 4 years ago
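Since the project drives GPT-2 through the ONNX Runtime C++ API (the `Ort` namespace), here is a minimal sketch of what that pattern typically looks like. This is illustrative only, not code from gpt2-cpp: the model file name, thread count, and tensor names (`input_ids`, `logits`) are assumptions, and the real graph's input/output names should be queried from the session.

```cpp
// Minimal ONNX Runtime (C++ API) inference sketch for a GPT-2 style graph.
// Assumptions, not taken from gpt2-cpp: model file "gpt2.onnx", tensor names
// "input_ids"/"logits", and the example token ids.
#include <onnxruntime_cxx_api.h>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpt2");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(4);

    // Note: on Windows the model path must be a wide string (ORTCHAR_T).
    Ort::Session session(env, "gpt2.onnx", opts);

    // Illustrative GPT-2 BPE ids; a real program would run a tokenizer.
    std::vector<int64_t> input_ids = {15496, 11, 995};  // "Hello, world"
    std::vector<int64_t> shape = {1, static_cast<int64_t>(input_ids.size())};

    Ort::MemoryInfo mem =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<int64_t>(
        mem, input_ids.data(), input_ids.size(), shape.data(), shape.size());

    // Tensor names are assumptions; query session.GetInputNameAllocated(...)
    // for the names exported with the actual graph.
    const char* in_names[] = {"input_ids"};
    const char* out_names[] = {"logits"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);

    // logits has shape [batch, seq_len, vocab]; greedy decoding would take
    // the argmax over the last position and append it to input_ids.
    float* logits = outputs[0].GetTensorMutableData<float>();
    std::cout << "first logit: " << logits[0] << "\n";
}
```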
Alternatives and similar repositories for gpt2-cpp
Users interested in gpt2-cpp are comparing it to the libraries listed below.
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆87 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated last year
- LLM training in simple, raw C/CUDA ☆109 · Updated last year
- Inference of Mamba models in pure C ☆196 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 10 months ago
- ☆125 · Updated 2 years ago
- RWKV in nanoGPT style ☆197 · Updated last year
- ☆70 · Updated 2 years ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆304 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆107 · Updated 2 years ago
- A C++ port of karpathy/llm.c featuring a tiny torch library while maintaining overall simplicity ☆39 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆28 · Updated 2 years ago
- GGUF parser in Python ☆28 · Updated last year
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆141 · Updated 2 months ago
- llama.cpp to PyTorch converter ☆34 · Updated last year
- Tiny C++ LLM inference implementation from scratch ☆97 · Updated last month
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆202 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- High-performance SGEMM on CUDA devices ☆115 · Updated 11 months ago
- Qwen2 and Llama 3 C++ implementation ☆49 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 11 months ago
- Step-by-step explanation/tutorial of llama2.c ☆225 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆58 · Updated last year
- Minimal C implementation of speculative decoding based on llama2.c (a sketch of the accept/reject loop follows this list) ☆26 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated last week
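For the speculative-decoding entry above, here is a toy sketch of the accept/reject loop (Leviathan-style speculative sampling), not code from that repository: the two "models" are stand-in functions returning fixed distributions, and a real implementation would run a small draft transformer and a large target transformer instead.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

constexpr int kVocab = 4;     // toy vocabulary size
constexpr int kDraftLen = 3;  // tokens proposed per round

// Stand-ins for real model forward passes: each returns a fixed next-token
// distribution regardless of context.
std::vector<double> draft_dist(const std::vector<int>&)  { return {0.4, 0.3, 0.2, 0.1}; }
std::vector<double> target_dist(const std::vector<int>&) { return {0.25, 0.25, 0.3, 0.2}; }

int sample(const std::vector<double>& w, std::mt19937& rng) {
    std::discrete_distribution<int> d(w.begin(), w.end());
    return d(rng);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<int> ctx = {0};  // prompt token(s)

    // 1) The cheap draft model proposes kDraftLen tokens autoregressively.
    std::vector<int> drafted;
    std::vector<int> tmp = ctx;
    for (int i = 0; i < kDraftLen; ++i) {
        int t = sample(draft_dist(tmp), rng);
        drafted.push_back(t);
        tmp.push_back(t);
    }

    // 2) The target model verifies the proposals (a single batched forward
    //    pass in practice). Each token is accepted with prob min(1, p/q).
    for (int t : drafted) {
        std::vector<double> q = draft_dist(ctx);
        std::vector<double> p = target_dist(ctx);
        if (unif(rng) < std::min(1.0, p[t] / q[t])) {
            ctx.push_back(t);  // accepted: keep the drafted token
        } else {
            // Rejected: resample from the residual max(0, p - q) (which
            // discrete_distribution normalizes) and drop the rest of the draft.
            std::vector<double> resid(kVocab);
            for (int v = 0; v < kVocab; ++v) resid[v] = std::max(0.0, p[v] - q[v]);
            ctx.push_back(sample(resid, rng));
            break;
        }
    }

    for (int t : ctx) std::printf("%d ", t);
    std::printf("\n");
}
```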