gf712 / gpt2-cpp
GPT-2 implementation in C++ using ONNX Runtime (Ort)
☆26 · Updated 5 years ago
Alternatives and similar repositories for gpt2-cpp
Users interested in gpt2-cpp are comparing it to the repositories listed below.
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆51 · Updated 11 months ago
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated 2 weeks ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated 2 years ago
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆28 · Updated 2 years ago
- RWKV in nanoGPT style ☆197 · Updated last year
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆306 · Updated last year
- Python bindings for ggml ☆147 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆87 · Updated 2 years ago
- GGML implementation of BERT model with Python bindings and quantization ☆58 · Updated last year
- ☆70 · Updated 2 years ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆202 · Updated 4 months ago
- High-performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation ☆266 · Updated 2 years ago
- Port of Meta's Encodec in C/C++ ☆227 · Updated last year
- ☆125 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆74 · Updated last year
- Inference RWKV v7 in pure C ☆44 · Updated 3 months ago
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆143 · Updated 3 months ago
- Fast sparse deep learning on CPUs ☆56 · Updated 3 years ago
- Embeddings-focused small version of the Llama NLP model ☆107 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- Inference Llama/Llama2/Llama3 models in NumPy ☆21 · Updated 2 years ago
- Clover: Quantized 4-bit Linear Algebra Library ☆114 · Updated 7 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated last week
- Minimal C implementation of speculative decoding based on llama2.c ☆25 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆54 · Updated last year
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,… ☆57 · Updated last year
- Universal cross-platform tokenizer bindings to HF and sentencepiece ☆451 · Updated 2 weeks ago